Binance Square

Luck3333


TRUMP'S CRYPTO EMERGENCY: HACK, TARIFFS, AND A NEW TRUST!

Trump's crypto ecosystem is under attack, but it is fighting back! Here's what happened in the last 48 hours:
🛡️ 1. Attack on the USD1 stablecoin thwarted
World Liberty Financial (WLFI) has just survived a coordinated social media attack. Hackers took over the co-founders' accounts to spread FUD and short the USD1 stablecoin.
Result: USD1 briefly dipped to $0.997 but rebounded immediately. Funds are SAFU.
🏦 2. The "World Liberty Trust" is coming
Trump is not backing down. $WLFI has officially applied to the OCC to establish the World Liberty Trust Company. The move aims to bring their stablecoin services directly into the US national banking system. Big for adoption!

ANNA & AIGARTH: BEYOND THE AI HYPE – DECODING THE NEW PARADIGM OF INTELLIGENCE

Introduction: A Shift in Understanding
In the world of Qubic, we often hear terms like AI, ANNA, and Aigarth used interchangeably. However, according to CFB's vision, we need to look deeper. While today's AI industry is building "tools," Qubic is building a new paradigm. As CFB put it: "Aigarth is not AI; it is a project that aims to find new paradigms for AI creation."
1. ANNA: The Living Neural Engine
ANNA (Artificial Neural Network Assembly) is the raw, evolving intelligence within the Qubic ecosystem. It is the active force trained by the global network through uPoW (Useful Proof of Work).

Embodied Cognition in Decentralized AI: A Practical Guide to the Neuraxon-Sphero Experiment

Abstract
The transition from predictive text models (Large Language Models) to Artificial General Intelligence (AGI) requires a fundamental shift from disembodied algorithms to physically interactive entities. The recent experiment by David Vivancos and Dr. José Sánchez—transplanting the Neuraxon v2.0 bio-inspired brain into a Sphero Mini robot—marks a critical milestone in #AliveAI. This article breaks down the theoretical framework, the hardware architecture, and the methodology for replicating this experiment.
1. The Theoretical Framework: Trinary Logic and Neural Growth
Traditional Artificial Neural Networks (ANNs) operate on binary logic, using continuous calculations that ultimately simulate "on/off" states. Neuraxon v2.0 completely departs from this by utilizing Trinary Logic, a system specifically designed to mimic the biological reality of human synapses.
In a biological brain, neurons do not merely excite one another; they also actively suppress signals to filter out noise and focus on important tasks. Neuraxon introduces this explicit third state: Inhibition. In this system, every artificial neuron can exist in one of three distinct conditions:
Excitation (+1): Actively passing the signal forward.
Rest (0): Remaining neutral and conserving energy.
Inhibition (-1): Actively blocking or suppressing the signal path.
How does a neuron make a decision? Instead of relying on rigid, pre-programmed rules, each Neuraxon neuron calculates a "weighted score" based on all incoming signals from its neighbors. If this combined signal is strong enough and surpasses a specific activation threshold, the neuron fires an Excitatory signal. If the incoming signals are overwhelmingly suppressive, it fires an Inhibitory signal. Otherwise, it stays at Rest.
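The decision rule described above can be sketched in a few lines of Python. This is a minimal illustration of the trinary excite/rest/inhibit scheme, not the actual Neuraxon v2.0 implementation; the function name, weights, and thresholds are all illustrative assumptions.

```python
def trinary_neuron(inputs, weights, fire_threshold=1.0, inhibit_threshold=-1.0):
    """Return +1 (Excitation), 0 (Rest), or -1 (Inhibition).

    `inputs` are the neighbors' states (each -1, 0, or +1) and `weights`
    scale each incoming signal. The thresholds are illustrative values,
    not taken from the Neuraxon source.
    """
    # "Weighted score" of all incoming signals from neighboring neurons
    score = sum(w * x for w, x in zip(weights, inputs))
    if score >= fire_threshold:
        return +1   # strong combined signal: fire an excitatory signal
    if score <= inhibit_threshold:
        return -1   # overwhelmingly suppressive input: fire inhibition
    return 0        # otherwise, stay at rest and conserve energy

# Two excitatory neighbors outweigh one inhibitory neighbor:
state = trinary_neuron(inputs=[+1, +1, -1], weights=[1.0, 0.7, 0.5])
```

Here `state` comes out as +1 because the weighted score (1.2) clears the firing threshold; flipping the input signs would push the neuron into inhibition instead.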
Unlike static LLMs, Neuraxon employs a "Neural Growth Blueprint." This means the "weight" or importance of these connections physically alters its own network topology based on real-world feedback. When the Sphero Mini robot hits a wall, the negative physical feedback literally rewires the network's connections for the next attempt.
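The feedback-driven rewiring can be pictured with a simple Hebbian-style update: connections that contributed to a bad outcome are weakened, and ones that contributed to a good outcome are strengthened. This is an assumption-laden toy, not the actual "Neural Growth Blueprint"; the function, learning rate, and reward encoding are hypothetical.

```python
def apply_feedback(weights, active_inputs, reward, lr=0.1):
    """Adjust each connection in proportion to its recent activity and
    the physical feedback (reward > 0 reinforces, reward < 0 weakens).

    Illustrative sketch only -- Neuraxon's real blueprint also grows and
    prunes connections, which this does not model.
    """
    return [w + lr * reward * x for w, x in zip(weights, active_inputs)]

w = [0.8, 0.7, 0.5]
# Hitting a wall produces negative feedback, weakening the two
# connections that were active during the failed move:
w = apply_feedback(w, active_inputs=[+1, +1, 0], reward=-1.0)
# w is now approximately [0.7, 0.6, 0.5]
```

The point of the sketch is the loop, not the numbers: physical failure feeds back into the weights, so the next attempt runs on a slightly different network.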
2. Hardware Architecture: Why the Sphero Mini?
To test physical cognition, the AI requires a "body" with sensory input and motor output. The Sphero Mini, despite its accessible ~$50 price point, serves as a perfect minimally viable organism.
It is equipped with an Inertial Measurement Unit (IMU), which is crucial for the AI to understand physics (gravity, momentum, and spatial orientation).
Sensory Input (Afferent Pathways): The 3-axis gyroscope and 3-axis accelerometer feed real-time spatial data back to the Neuraxon brain.
Motor Output (Efferent Pathways): The AI calculates the required trinary signals to drive the internal dual-motor system, dictating speed and heading.
3. Experimental Methodology: Replicating the Setup
For researchers looking to experiment with open-science #AliveAI, the protocol is straightforward:
Step 1: Hardware Preparation
Acquire a Sphero Mini robot. Ensure it is fully charged and Bluetooth is enabled on your host machine.
Step 2: Access the Neuraxon Brain Interfaces
Navigate to the open-source Hugging Face spaces provided by David Vivancos:
For Locomotion (Neuraxon2MiniControl): This interface acts as the motor cortex, allowing you to observe how the neural network calculates basic navigation paths based on spatial input.
For Fine Motor Skills (Neuraxon2MiniWrite): This requires higher-level cognitive processing. The AI must calculate the exact physical trajectories, accounting for friction and momentum, to draw specific letters or words on a surface.
Step 3: The Feedback Loop
Connect the Sphero to the interface via the Web Bluetooth API. Do not simply execute commands; observe the neural growth. When the Sphero attempts to write a letter, monitor how the Neuraxon code (available on GitHub) processes the physical drift and attempts to correct its trajectory in subsequent movements.
4. Analytical Implications
This experiment proves that intelligence cannot be fully realized in a vacuum. By forcing the AI to interact with physical laws, Qubic and the Vivancos team are building the foundational nervous system for future robotics. Today, it drives a sphere; tomorrow, this exact trinary, bio-inspired architecture could regulate the complex kinematics of a humanoid robot.
Key Takeaways: The Future of #AliveAI
From "Dead" to "Alive" AI: Moving beyond static Large Language Models (LLMs), Neuraxon v2.0 introduces embodied cognition, allowing AI to learn and adapt through real-world physical interaction and failure.
Trinary Logic Superiority: By utilizing a -1 (Inhibit), 0 (Rest), and 1 (Excite) framework, Neuraxon mimics true biological brain efficiency, drastically reducing the computational waste seen in traditional binary systems.
Accessible Open Science: The integration with a $50 Sphero Mini robot democratizes AI testing. It proves that developing physical AI doesn't require multi-million-dollar robotics labs.
The Blueprint for AGI: Powered by the decentralized Qubic network, this "brain transplant" experiment lays the foundational nervous system for the complex kinematics of future humanoid robotics.
#Qubic #AGI
Neuraxon2MiniControl 👉https://huggingface.co/spaces/DavidVivancos/Neuraxon2MiniControl
Most blockchains process transactions in blocks. Miners compete. Transactions propagate. Forks happen. Reorganizations occur. Qubic eliminates all of that.
Forget What You Know About Instant Finality. This Is Qubic’s Instant Finality
Finality. A simple word that carries immense weight in the blockchain space. It’s the point where a transaction is locked, beyond tampering or doubt. For most blockchains, this isn’t as immediate or certain as one might think. Bitcoin? You’re counting six blocks deep, holding your breath. Solana? Validators have to agree and then execute - quick, but not instantaneous. But Qubic? Qubic redefines this process entirely, delivering instant finality in a system where forks don’t exist.
What Traditional Blockchains Get Wrong
In most blockchains, finality is far from straightforward. Forks - branching pathways created when competing chains temporarily exist - trap transactions in limbo. Users must wait for confirmations, wait for chain disputes to be resolved, and hope their transaction makes it onto the “main” branch. It’s a cumbersome process.
Take Bitcoin, for example. You send a transaction, but the process isn’t instantaneous. One block passes, then two, then six - only then does the transaction feel secure. Why? Because forks can override earlier blocks, potentially reversing your transaction.
Even faster systems like Solana aren’t immune to bottlenecks. Validators must reach agreement, a process that slows when network traffic surges or disagreements arise. The result? "Instant" finality that’s not quite instant.
Qubic’s Rewrite of Finality
Qubic’s approach is different. It eliminates forks entirely and slashes waiting times to virtually zero. Its tick-based architecture removes the inefficiencies baked into traditional blockchain designs.
Tick-Based Processing
Imagine time segmented into fixed, immutable slices called “ticks.” Each tick represents a brief processing interval. Transactions submitted during, for example, Tick 100 are evaluated and finalised by Tick 101. If your transaction is valid, it’s confirmed. If not, you’ll know immediately - it’s as simple as that. No waiting, no ambiguities.
A Forkless Highway
While traditional blockchains navigate complex branching paths, Qubic moves in a straight line. Transactions flow through a single, unbroken sequence, with no forks to manage or resolve. This streamlined approach eliminates the need for redundant confirmations.
Deterministic Finality
In Qubic, once a transaction is processed within its tick, the outcome - whether successful or not - is final. There’s no need for validators to reach additional agreements or for users to wait for multiple confirmations.
Clarifying Success and Finality
It’s important to note that Qubic guarantees the finality of valid transactions. However, not every transaction will succeed. If a transaction fails - perhaps due to insufficient funds, conflicting requests, or invalid inputs - the network will reject it during the tick processing, and you’ll know immediately. This transparency allows users to act quickly without the uncertainty of delayed rejections.
Why Does This Matter?
It’s not just about speed. It’s about confidence - knowing that what you send is final before the thought of doubt even creeps in.
Finance
Cross-border payments processed in under a second. No intermediaries, no reversals, just frictionless commerce. 
Gaming
Imagine in-game transactions - buying items, trading assets, earning rewards - all processed instantly. No lag, no waiting, no broken immersion.
Supply Chains
A factory ships a product. The transaction logs instantly, providing real-time visibility to suppliers, shippers, and buyers. The chain of custody is secure, final, and transparent.
A Walkthrough: Qubic’s Simplicity in Action
Let’s break it down. Say you’re sending $QUBIC coins:
At Tick 100, you initiate the transaction.
By Tick 101, the network processes it. If it's valid, it's finalised. If not, you'll know instantly why it failed.
The result? A seamless user experience.
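The walkthrough above can be modelled as a toy simulation. This is a sketch of the tick-based idea only, assuming a trivial balance-check validity rule; it is not Qubic's actual protocol, and the tick numbers and function names are illustrative.

```python
def process_tick(pending, balances):
    """Finalise every transaction submitted in the previous tick.

    Each outcome -- confirmed or rejected -- is deterministic and final
    the moment the tick is processed. Toy model, not Qubic's protocol.
    """
    results = []
    for sender, receiver, amount in pending:
        if balances.get(sender, 0) >= amount:      # validity check
            balances[sender] -= amount
            balances[receiver] = balances.get(receiver, 0) + amount
            results.append((sender, receiver, amount, "FINAL: confirmed"))
        else:                                      # rejection is also final
            results.append((sender, receiver, amount, "FINAL: rejected"))
    return results

balances = {"alice": 100}
# Submitted during Tick 100, finalised at Tick 101:
outcomes = process_tick([("alice", "bob", 40), ("alice", "carol", 80)], balances)
```

The first transfer confirms; the second fails the balance check and is rejected in the same tick. Either way the sender knows the final outcome immediately, with no confirmation depth to wait for.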
The Forkless Advantage
Forks complicate things. They demand extra resources, complicate consensus, and inject uncertainty into every transaction. By eliminating forks entirely, Qubic reclaims all that wasted energy and delivers a system that:
Reduces wasted computational resources.
Simplifies transaction flow.
Delivers a seamless and reliable experience.
Ready to Explore the Next Evolution in Blockchain?
Qubic is making a statement. A statement that finality should be instantaneous, that confidence should be absolute, that innovation should never come at the cost of usability. 
Whether you’re a developer, gamer, or business leader, Qubic’s instant finality provides the reliability, speed, and confidence you need to build innovative systems. From high-speed financial applications to gaming platforms to supply chain systems, Qubic’s instant finality transforms the possibilities.
Want to know more? Take a closer look and discover the future of blockchain technology.
Join the Community on Discord and Telegram.
Explore Qubic Docs.
BTC testing $66K! 🚀 Whales are accumulating while the "Short Squeeze" heats up. Don't let "Extreme Fear" blind you to this rebound. $SOL and $XRP are decoupling fast. 💎 Are you Long or Short for the weekend? Drop a 🚀 if you're holding! #BTC #Crypto2026 #Binance
CRYPTO ON THE EDGE: REBOUND OR RUTHLESS CRASH? THE "RED FEBRUARY" SURVIVAL GUIDE
The crypto market is screaming this week! With the Fear & Greed Index hitting a bone-chilling 7 (Extreme Fear), the air is thick with panic. But as the old saying goes: "Be greedy when others are fearful." Is this the ultimate "Buy the Dip" moment or a trap before a total meltdown? Let's dive into the chaos.
⚡ The "State of the Union" Rebound
Just when Bitcoin was flirting with disaster at $63,000, President Trump’s State of the Union address injected a shot of adrenaline into the charts. BTC and ETH jumped 3% instantly as the market bet on macro strength. However, the "Trump Pump" is facing a massive wall of resistance.
💎 Hot Coins to Watch: The Winners vs. The Survivors
$BTC: Currently fighting for its life around $65,000. Institutional outflows from ETFs are heavy, but whales are quietly accumulating at the $63k support. Watch out: a break below $60k could trigger a liquidation bloodbath.
$SOL: The "Institutional Darling" of 2026. While others bleed, SOL saw $13.17M in ETF inflows this week. Breaking $80 was a statement: SOL is leading the relief rally.
$XRP: The CLARITY Act is the only thing investors are talking about. Ripple's CEO hints at a massive bull run if the bill passes. Is XRP the "safety play" of the year?
The Alpha Movers: Keep your eyes on Bittensor (TAO) and Mantra (OM). These projects are defying gravity with 30-40% gains while the rest of the market stalls.
⚠️ The Risk: "Extreme Fear" for a Reason
Don't be fooled by the green candles. With the Fed staying hawkish and global trade tensions rising, the "Double Bottom" theory is being tested. If the $60,000 support fails, we are looking at a fast slide to $55,000.
🔥 ACTION PLAN FOR YOU:
Don't FOMO: The volatility is insane. Use Limit Orders, not Market Orders.
Watch the $64.5K Pivot: If BTC holds this today, the weekend could be explosive.
Diversify into DePIN/AI: The money is rotating out of memes and into utility (TAO, FIL).
🚀 Are you Bullish or Bearish? Drop your price prediction for BTC this Sunday in the comments!
==========
🚨 FLASH UPDATE: BTC RECLAIMING $66K? THE WHALES ARE MOVING! 🚨
Great summary! But look at the charts RIGHT NOW – something big is brewing.
BTC Breakout: In the last 4 hours, Bitcoin has pushed back above $65,800. We are seeing massive "Buy Walls" appearing on the order books. Is the $64.5k pivot holding? It looks like the "Weak Hands" have been shaken out!The "CLARITY" Effect: XRP is starting to decouple from the market. The volume is surging – insiders might know something we don’t about the bill's progress.Liquidation Heatmap: Over $150M in Shorts are sitting just above $66.2k. If we hit that, expect a "Short Squeeze" that could catapult us to $68k by the weekend.
⚠️ URGENT: Don't get caught sleeping on this move. The "Extreme Fear" is exactly when the biggest gains are made.
What’s your move? Are you adding more to your bags or waiting for a confirmation? Let’s talk below! 👇
Visualizza traduzione
CRYPTO ON THE EDGE: REBOUND OR RUTHLESS CRASH? THE "RED FEBRUARY" SURVIVAL GUIDEThe crypto market is screaming this week! With the Fear & Greed Index hitting a bone-chilling 7 (Extreme Fear), the air is thick with panic. But as the old saying goes: "Be greedy when others are fearful." Is this the ultimate "Buy the Dip" moment or a trap before a total meltdown? Let's dive into the chaos. ⚡ The "State of the Union" Rebound Just when Bitcoin was flirting with disaster at $63,000, President Trump’s State of the Union address injected a shot of adrenaline into the charts. BTC and ETH jumped 3% instantly as the market bet on macro strength. However, the "Trump Pump" is facing a massive wall of resistance. 💎 Hot Coins to Watch: The Winners vs. The Survivors $BTC : Currently fighting for its life around $65,000. Institutional outflows from ETFs are heavy, but whales are quietly accumulating at the $63k support. Watch out: A break below $60k could trigger a liquidations bloodbath.$SOL : The "Institutional Darling" of 2026. While others bleed, SOL saw $13.17M in ETF inflows this week. Breaking $80 was a statement—SOL is leading the relief rally.$XRP : The CLARITY Act is the only thing investors are talking about. Ripple CEO hints at a massive bull run if the bill passes. Is XRP the "safety play" of the year?The Alpha Movers: Keep your eyes on Bittensor (TAO) and Mantra (OM). These projects are defying gravity with 30-40% gains while the rest of the market stalls. ⚠️ The Risk: "Extreme Fear" for a Reason Don't be fooled by the green candles. With the Fed staying hawkish and global trade tensions rising, the "Double Bottom" theory is being tested. If the $60,000 support fails, we are looking at a fast slide to $55,000. 🔥 ACTION PLAN FOR YOU: Don't FOMO: The volatility is insane. 
Use Limit Orders, not Market Orders.Watch the $64.5K Pivot: If BTC holds this today, the weekend could be explosive.Diversify into DePIN/AI: The money is rotating out of memes and into utility (TAO, FIL). 🚀 Are you Bullish or Bearish? Drop your price prediction for BTC this Sunday in the comments! ========== 🚨 FLASH UPDATE: BTC RECLAIMING $66K? THE WHALES ARE MOVING! 🚨 Great summary! But look at the charts RIGHT NOW – something big is brewing. BTC Breakout: In the last 4 hours, Bitcoin has pushed back above $65,800. We are seeing massive "Buy Walls" appearing on the order books. Is the $64.5k pivot holding? It looks like the "Weak Hands" have been shaken out!The "CLARITY" Effect: XRP is starting to decouple from the market. The volume is surging – insiders might know something we don’t about the bill's progress.Liquidation Heatmap: Over $150M in Shorts are sitting just above $66.2k. If we hit that, expect a "Short Squeeze" that could catapult us to $68k by the weekend. ⚠️ URGENT: Don't get caught sleeping on this move. The "Extreme Fear" is exactly when the biggest gains are made. What’s your move? Are you adding more to your bags or waiting for a confirmation? Let’s talk below! 👇

CRYPTO ON THE EDGE: REBOUND OR RUTHLESS CRASH? THE "RED FEBRUARY" SURVIVAL GUIDE

The crypto market is screaming this week! With the Fear & Greed Index hitting a bone-chilling 7 (Extreme Fear), the air is thick with panic. But as the old saying goes: "Be greedy when others are fearful." Is this the ultimate "Buy the Dip" moment or a trap before a total meltdown? Let's dive into the chaos.
⚡ The "State of the Union" Rebound
Just when Bitcoin was flirting with disaster at $63,000, President Trump’s State of the Union address injected a shot of adrenaline into the charts. BTC and ETH jumped 3% instantly as the market bet on macro strength. However, the "Trump Pump" is facing a massive wall of resistance.
💎 Hot Coins to Watch: The Winners vs. The Survivors
$BTC: Currently fighting for its life around $65,000. Institutional outflows from ETFs are heavy, but whales are quietly accumulating at the $63k support. Watch out: a break below $60k could trigger a liquidation bloodbath.
$SOL: The "Institutional Darling" of 2026. While others bleed, SOL saw $13.17M in ETF inflows this week. Breaking $80 was a statement—SOL is leading the relief rally.
$XRP: The CLARITY Act is the only thing investors are talking about. Ripple's CEO hints at a massive bull run if the bill passes. Is XRP the "safety play" of the year?
The Alpha Movers: Keep your eyes on Bittensor (TAO) and Mantra (OM). These projects are defying gravity with 30-40% gains while the rest of the market stalls.
⚠️ The Risk: "Extreme Fear" for a Reason
Don't be fooled by the green candles. With the Fed staying hawkish and global trade tensions rising, the "Double Bottom" theory is being tested. If the $60,000 support fails, we are looking at a fast slide to $55,000.
🔥 ACTION PLAN FOR YOU:
Don't FOMO: The volatility is insane. Use Limit Orders, not Market Orders.
Watch the $64.5K Pivot: If BTC holds this today, the weekend could be explosive.
Diversify into DePIN/AI: The money is rotating out of memes and into utility (TAO, FIL).
🚀 Are you Bullish or Bearish? Drop your price prediction for BTC this Sunday in the comments!
==========
🚨 FLASH UPDATE: BTC RECLAIMING $66K? THE WHALES ARE MOVING! 🚨
Great summary! But look at the charts RIGHT NOW – something big is brewing.
BTC Breakout: In the last 4 hours, Bitcoin has pushed back above $65,800. We are seeing massive "Buy Walls" appearing on the order books. Is the $64.5k pivot holding? It looks like the "Weak Hands" have been shaken out!
The "CLARITY" Effect: XRP is starting to decouple from the market. The volume is surging – insiders might know something we don’t about the bill's progress.
Liquidation Heatmap: Over $150M in Shorts are sitting just above $66.2k. If we hit that, expect a "Short Squeeze" that could catapult us to $68k by the weekend.
⚠️ URGENT: Don't get caught sleeping on this move. The "Extreme Fear" is exactly when the biggest gains are made.
What’s your move? Are you adding more to your bags or waiting for a confirmation? Let’s talk below! 👇
I have a bit of a "good problem." My absolute favorite project—a pioneer in Biological AI and Trinary logic—isn't listed on Binance yet. However, I believe the community here would gain massive value from understanding its architecture (like Neuraxon) before it goes mainstream.
Binance Square Official
Turn your creativity into real rewards.

🔸 Post content on Binance Square
🔸 Readers click and make eligible trades
🔸 Earn up to 50% commission on trading fees + share a limited 5,000 USDC bonus pool!

No registration needed. No earning cap.
Learn more at 👉 Write to Earn — Open to All
Yes! I’ve been sharing how #Qubic is redefining AI through Neuraxon and Trinary logic. If you want to see how decentralized intelligence actually mimics the human brain, check out my latest deep dive here. Let's earn by spreading real tech knowledge! 🧠⚡️ 👇 [https://www.binance.com/en/square/post/295315343732018](https://www.binance.com/en/square/post/295315343732018)
Binance Academy
Have you participated in the #writetoearn program to earn rewards by sharing your crypto knowledge?
GM!
Binance Angels
GM/Good day,

Press enter to get started #Binance 😀💪
$BNB
{spot}(BNBUSDT)
Execution Fees Are Now Live on Qubic: What You Need to Know

As of January 14, 2026, contracts now pay for the computational resources they actually consume.
The update was first validated in a live testnet environment, then rolled out to mainnet, introducing organic burn directly proportional to the work a contract performs.
Why Execution Fees Matter
Every smart contract on Qubic maintains an execution fee reserve, essentially a prepaid balance that covers its compute costs.
When that reserve is depleted, the contract doesn’t disappear, but it does go dormant. It can still receive funds and respond to basic system events, but its core functions can’t be called again until the reserve is replenished.
Previously, a contract only needed a positive balance to remain active. The system verified that a reserve existed, but execution costs were not deducted based on actual computation. That has now changed. Contracts are charged proportionally to how long their procedures take to execute, aligning fees directly with real computational work.
How the System Works
The fee mechanism operates in phases, each lasting 676 ticks. Here's the process:
Execution and Measurement: When computors run your contract's procedures, they measure how long each execution takes.
Accumulation: These measurements build up over a complete 676-tick phase.
Consensus: Computors share their measured values through special transactions. The network aggregates these reports and uses the two-thirds percentile to determine a fair, agreed-upon execution fee.
Deduction: The consensus fee gets subtracted from the contract's reserve in the following phase. This phase-based approach keeps consensus efficient while ensuring accuracy across the network.
Phase n-1          Phase n              Phase n+1
(676 ticks)        (676 ticks)          (676 ticks)
    │                  │                    │
    └── Fees computed ─┘── Fees deducted ───┘
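The phase lifecycle above can be condensed into a small simulation. This is an illustrative Python sketch, not Qubic source code: the index formula used for the "two-thirds percentile", the sample report values, and the multiplier are assumptions made for demonstration only.

```python
def consensus_fee(reported_times, fee_multiplier):
    """Aggregate per-computor execution-time reports into one fee.

    Assumption: the "two-thirds percentile" is read here as the sorted
    value at index 2*(n-1)//3; the exact on-chain rule may differ.
    """
    ordered = sorted(reported_times)
    idx = 2 * (len(ordered) - 1) // 3
    return ordered[idx] * fee_multiplier

def settle_phase(reserve, reported_times, fee_multiplier):
    """Deduct the phase's consensus fee from a contract's reserve."""
    fee = consensus_fee(reported_times, fee_multiplier)
    return max(reserve - fee, 0), fee

# One 676-tick phase: computors report how long the procedures took.
reports = [100, 120, 90, 110, 400, 105, 95]  # hypothetical time units
new_reserve, fee = settle_phase(10_000, reports, fee_multiplier=10)
print(fee, new_reserve)  # a single outlier report cannot drag the fee up
```

Because the network takes a percentile of all reports rather than a mean, one computor submitting an inflated measurement has little effect on the agreed fee.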
Who Pays for What
The system follows a simple principle: whoever initiates an action pays for it. When a user calls a contract procedure, that contract's reserve covers the cost. When Contract A calls Contract B, Contract B's reserve gets checked before execution proceeds.
Some operations remain free of execution fee checks:
| Operation | Fee Check |
| --- | --- |
| User procedure calls | Yes |
| Contract-to-contract procedures | Yes |
| Contract-to-contract functions | Yes |
| System callbacks (transfers, etc.) | No |
| Read-only functions | No |
| Epoch transitions | No |
Functions that only read data never cost anything. They provide access to contract state without modification, so they run regardless of reserve status. For more details on how procedures and functions differ, see the QPI documentation.
What Builders Should Do
If you maintain a smart contract on Qubic, consider these steps:
Review your reserve status. Check contracts.qubic.tools to see current fee consumption for your contract based on execution patterns. You can also monitor contract activity through the Qubic Explorer.
Examine your procedures. Code that returns early uses fewer resources. Procedures that loop excessively or repeat redundant operations will cost more.
Plan for sustainability. Contracts can replenish their reserves through the qpi.burn() function or through QUtil's BurnQubicForContract procedure. You can execute these operations using the Qubic CLI. Make sure your contract includes a reliable mechanism for maintaining adequate reserves throughout its lifecycle.
Handle errors gracefully. When calling other contracts, check whether those calls succeeded. If a target contract has insufficient fees, your call will fail and return an error code. Build in fallback logic where appropriate.
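The caller-side pattern above can be sketched as follows. This illustrative Python models a cross-contract call that fails with an error code when the target's reserve is too low; the names `call_contract` and `swap_with_fallback` and the error codes are hypothetical, not part of the Qubic API.

```python
# Hypothetical status codes for the sketch (not real Qubic values).
OK = 0
INSUFFICIENT_FEE_RESERVE = 1

def call_contract(target_reserve, cost):
    """Returns (status, new_reserve); the call does not run if the
    target contract cannot cover its execution fee."""
    if target_reserve < cost:
        return INSUFFICIENT_FEE_RESERVE, target_reserve
    return OK, target_reserve - cost

def swap_with_fallback(primary_reserve, backup_reserve, cost):
    """Check the status code of each call and fall back when it fails."""
    status, _ = call_contract(primary_reserve, cost)
    if status == OK:
        return "primary"
    status, _ = call_contract(backup_reserve, cost)
    return "backup" if status == OK else "failed"

print(swap_with_fallback(primary_reserve=5, backup_reserve=100, cost=50))
```

The point of the pattern is simply that a failed call is an error code to branch on, not an exception to ignore.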
For developers new to building on Qubic, the smart contract development guide provides a solid starting point.
What Computors Should Know
Computors have a new configuration option: the execution fee multiplier. This setting converts raw execution time into fee amounts. The network reaches consensus using the two-thirds percentile of all computor-submitted values, preventing any single operator from dramatically shifting costs.
For more information about running a computor, refer to the computor documentation.
Refilling Reserves
Three methods exist for adding to a contract's execution fee reserve:
Internal burning: Contracts can call qpi.burn(amount) to convert collected fees into reserve balance. They can also fund other contracts using qpi.burn(amount, targetContractIndex).
External contributions: Anyone can send funds to the QUtil contract's BurnQubicForContract procedure, specifying which contract should receive the reserve boost.
Legacy method: QUtil's BurnQubic procedure adds specifically to QUtil's own reserve.
These mechanisms tie directly into Qubic's tokenomics, where burning serves as the core deflationary mechanism rather than traditional transaction fees.
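The three refill paths can be pictured as a toy ledger. The function names mirror those mentioned above (qpi.burn, BurnQubicForContract, BurnQubic), but this Python is a simulation of the accounting only, not the Qubic API or its real signatures.

```python
# Toy ledger: burned QUBIC credits a contract's execution fee reserve.
reserves = {"QUtil": 0, "MyContract": 0}

def burn_to_reserve(contract, amount):
    """Models qpi.burn(amount[, target]): burn credits a reserve."""
    reserves[contract] += amount

def burn_qubic_for_contract(target, amount):
    """Models QUtil's BurnQubicForContract: anyone tops up any contract."""
    burn_to_reserve(target, amount)

def burn_qubic(amount):
    """Models the legacy BurnQubic: credits QUtil's own reserve."""
    burn_to_reserve("QUtil", amount)

burn_to_reserve("MyContract", 1_000)        # internal burning
burn_qubic_for_contract("MyContract", 500)  # external contribution
burn_qubic(250)                             # legacy method
print(reserves)  # {'QUtil': 250, 'MyContract': 1500}
```

All three paths end in the same place: tokens leave circulation and the target contract's reserve grows by the burned amount.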
Protection for Users
The system includes built-in safeguards. If you send a transaction to a contract with depleted reserves, any attached funds are automatically returned. You won’t lose money because a contract failed to maintain its balance.
Read-only queries remain available even for dormant contracts. You can check a contract's state at any time, but state-changing procedures won’t run until the reserve is replenished.
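A minimal sketch of these safeguards, assuming a simplified model where a non-positive reserve makes the contract dormant; `send_transaction` and `read_state` are hypothetical names introduced for illustration.

```python
def send_transaction(reserve, attached_funds):
    """Returns (executed, refunded): a state-changing call to a dormant
    contract does not run, and any attached funds bounce back."""
    if reserve <= 0:
        return False, attached_funds  # funds automatically returned
    return True, 0

def read_state(state):
    """Read-only functions run regardless of reserve status."""
    return state

executed, refund = send_transaction(reserve=0, attached_funds=42)
print(executed, refund)      # the user loses nothing
print(read_state({"k": 1}))  # state stays queryable while dormant
```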
What This Means for Qubic
This update marks a meaningful shift in how Qubic handles smart contract economics. 
Contracts that perform more work pay more. Efficient code becomes genuinely valuable. And the network gains a sustainable mechanism for burning tokens tied to actual utility rather than arbitrary fixed amounts.
If you build on Qubic and haven't yet reviewed your contracts under this new model, now is the time. For technical details, see the full reference documentation on GitHub.
Join the Qubic Discord or Telegram community to ask questions, share ideas, and discuss implementation strategies with other builders.
#Qubic #SmartContracts
Qubic's 2026 Vision: Building the Future of Decentralized AI

The end-of-year AMA provided a comprehensive look at 2025's progress and the roadmap ahead for 2026. The message was clear: building decentralized AI infrastructure for the long haul, not chasing market hype.
2025: Building before Scaling
Qubic hit serious technical milestones in 2025. The network now runs at a two-second tick speed with 2TB of memory support. This foundation is what enables Qubic to support large-scale computation, advanced AI models, and infrastructure-level applications. Custom mining integration with external projects like Monero proved that Qubic can work as a universal compute layer.
On the developer side, multiple SDKs came online along with automated smart contract validation. The governance model matured too. Computors approved the first token halving, and voting mechanisms improved across the board.
Certik certified Qubic at 15.5 million TPS, positioning the network among the fastest blockchain infrastructures globally. This certification validates the technical foundation and opens doors for partnerships that require proven performance metrics.
The Madrid hackathon brought in 120 hackers across 27 teams, with €80,000 in prizes funded through partnerships with Telefonica and the Madrid government. This level of developer engagement doesn't happen by accident.
The momentum continued with RaiseHack 2025 in Paris, part of Europe's leading AI conference. The Qubic Track attracted 400 developers out of 6,000 total participants, with 22 teams advancing to finals at Le Carrousel du Louvre. Most recently, the "Hack the Future" hackathon drew 1,654 participants across 265 teams, resulting in 102 project submissions spanning smart contract development and no-code applications through EasyConnect integrations.
Beyond hackathons, the team made its presence felt at Token 2049 Singapore. With 25,000+ attendees, the event generated over 50 partnership leads and six active integrations. The workshop was upgraded to the main TON Stage, reaching hundreds of attendees. These efforts led directly to valuable collaborations, including Avicenne Studio, which later won the RFP for the Solana Bridge.
The Science Behind The Vision
David Vivancos and Dr. José Sanchez pushed forward on the AI research front. Two major papers were published: a theoretical AGI position paper that's been read over 16,000 times, and Neuraxon, a practical AI model already seeing traction with 2,500 reads and 129 code clones.
Unlike static language models, Neuraxon is designed as an evolving AI system rather than a fixed snapshot of intelligence. Integration into the Qubic network by spring 2026 will create what the team calls a "living AI system" that evolves over time. Traditional peer review processes slowed things down initially, leading to a pivot toward building practical models that demonstrate real progress.
Marketing That Moves Numbers
Since October, the marketing push has generated over 10 million ad impressions. CRM contacts jumped 730% from 536 to 4,451. Live stream AMAs crossed 100,000 views after switching to Streamyard for better distribution.
The paid analytics tell the story: 1300% performance increase on a modest $7,042 ad spend, with engagement metrics up 5656%. DeFiMomma, the marketing lead, emphasized building accountable systems before scaling further. No chaotic growth sprints, just measured execution.
For 2026, the positioning shifts and expands to establish Qubic as the most credible AI compute network for miners, computors, and developers. The brand identity will emphasize science, compute, and mathematical integrity. Global PR replaces short-term partnership hype. The target audience? Institutions that need to understand why decentralized AI infrastructure matters.
Ecosystem Reality Check
Alber, who has been leading ecosystem development, was refreshingly direct about what worked and what didn't. Some partnerships took longer than expected. Exchange integrations proved to be more complicated than anticipated, simply because Qubic's architecture differs from standard chains. External dependencies created delays.
The approach evolved to manage expectations better. A "fail fast, build fast" philosophy now guides incubation projects. Early MVP launches will replace long development cycles before community engagement. The focus areas for new projects are: interoperability bridges, stablecoins, and perpetual DEXs that leverage Qubic's speed advantage.
The Solana Bridge, being built by Avicenne Studio after winning the RFP, should launch around May or June 2026. Alber confirmed he's stepping back from the public ecosystem lead role, though he'll continue supporting Qubic as a whole. The AI teams in the ecosystem are now self-sustaining.
What's Coming in 2026
The technical roadmap includes several key upgrades. Seamless updates will allow core network changes without downtime, which matters for partner exchanges. The mining algorithm continues evolving to support ongoing research. By year's end, the network transitions from the AVX2 to the AVX512 instruction standard.
The Qubic Network Guardians program just launched to incentivize running light nodes through gamification and leaderboards. Making network participation accessible to more people strengthens decentralization.
Planning cycles shift to three-month time boxes with community-driven feature prioritization. The transparency should help ecosystem builders plan their own development timelines.
Community Culture Shift
2025 brought price volatility that tested the community. El Clip, the community workgroup lead, described a reshaping of identity. Moderation improved. The focus moved toward constructive criticism over reactive conflict.
The community is developing shared norms rather than top-down rules. Early intervention on disruptive behavior helps maintain productive discussions. Long-term contributors get recognition, which reduces friction.
The expectation for 2026? Consolidate this new identity. Open participation continues, but the culture rewards substance over speculation. Short-term thinking gets left behind.
The Core Philosophy
Throughout the AMA, one theme kept surfacing: Qubic exists to build decentralized AI infrastructure that solves complex problems. The token facilitates the economy, but the real value lives in the technology itself.
Alber framed it directly:
Even without the token, Qubic enables powerful outsourced computation and AI development. That's the foundation everything else builds on.
The reality is that AI advances so rapidly that rigid plans become obsolete. Flexibility matters. Iterative development matters. Continuous adaptation to new research matters.
The next three to five years aim to create an AI economy with interconnected agents and high-speed crypto applications. Qubic positions itself as the compute network where users actually want to deploy their workloads.
Looking Forward
The AMA focused on substance over speculation. The team laid out technical milestones, acknowledged where mistakes were made, and outlined concrete plans for 2026.
The scientific research continues pushing boundaries. The developer ecosystem keeps growing. The marketing strategy targets credibility and consistency over short term hype. The community matures into something sustainable.
The goal is clear: become the most powerful decentralized AI compute network. The 2025 foundation is solid, and the 2026 roadmap focuses on execution and advancements.
The pieces are moving into place. Now comes the hard part: delivering on all of it.
🌐 https://qubic.org #Qubic
If AI eats software, $QUBIC powers the AI. 🧠 Traditional AI is hitting an energy wall. Qubic solves this with Neuraxon—a decentralized AI using Trinary logic (-1,0,1) to mimic the human brain's 20W efficiency. Smarter units > Bigger models. ⚡️🚀 #Qubic #AI #uPoW
CZ: "Software eats the world. AI eats software. 😂"

Biology-Inspired AI: How Neuromodulation Transforms Deep Neural Networks

Analyzing deep neural network information through multiscale principles
In the brain, neuromodulation is the set of mechanisms through which certain neurotransmitters modify the functional properties of neurons and synapses, altering how they respond, for how long they integrate information, and under which conditions they change with experience.
These effects are produced mainly through neurotransmitters such as dopamine, serotonin, noradrenaline, and acetylcholine, which act on receptors known as metabotropic receptors. Unlike fast receptors, these do not directly generate an electrical signal; instead, they activate cellular signaling pathways that modify the dynamic regime of the neuron and the circuit.

Neuraxon Time: Why Intelligence Is Not Computed in Steps, but in Time

Written by Qubic Scientific Team

How does a neuron function over time?
Biological neurons do not function like a bedroom light switch being turned on. They are continuous dynamic systems. The neuronal state evolves constantly, even in the absence of external stimuli.
How does a neuron function over time?
Basically, by moving electrical charges (ions) in or out of its membrane, that is, by changing its electrical potential. Ions enter or leave (mainly sodium and potassium) through the different gates of the neuron with a certain intensity, modifying the potential. There are some gates, called leakage gates, where ions are always entering and leaving.
Time is implicit. The electrical potential changes constantly, over time.
The change in a neuron’s electrical potential over time depends on:
The external current applied + the balance between the flows of sodium ions (which increase it) and potassium ions (which decrease it) through the gates that open and close.
Don’t panic with the graph. Positive and negative electrical charges (ions) flow through the gates, causing depolarization (so the current moves along to the end of the neuron) or hyperpolarization (so it returns to a neutral state).

The potential (V) changes over time; mathematically, dV/dt is a function of the sum of the currents through the input and output gates.
This is the fundamental model of computational neuroscience, which expresses that the state of the neuron depends both on current signals and on its immediate history. There is no “reset” between events, since each stimulus falls onto a system that is always running.
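To make the "always running" point concrete, here is a minimal numerical sketch of this kind of membrane equation: a standard leaky integrate-and-fire model integrated with Euler's method. The `simulate_lif` helper and its parameter values are illustrative assumptions, not taken from the article.

```python
import numpy as np

def simulate_lif(i_ext, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Euler integration of a leaky integrate-and-fire neuron.

    dV/dt = (-(V - v_rest) + i_ext) / tau
    The leak term plays the role of the always-open leakage gates;
    i_ext is the external applied current.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i in enumerate(i_ext):
        dv = (-(v - v_rest) + i) / tau
        v += dv * dt
        if v >= v_thresh:       # threshold crossing -> action potential
            spikes.append(t)
            v = v_reset         # after-spike reset (hyperpolarized state)
        trace.append(v)
    return np.array(trace), spikes

# A constant supra-threshold current produces regular spiking; the state
# keeps evolving between spikes with no "reset" of the system as a whole.
trace, spikes = simulate_lif(np.full(2000, 20.0))
print(len(spikes))
```

Note there is no discrete input-output mapping here: each stimulus sample falls onto a state that carries its immediate history, which is the point the paragraph above makes.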
Now let’s move to Neuraxon, which is a bio-inspired model.

We want it to be alive, an intelligent tissue. It cannot have discrete states, but continuous ones.
In Neuraxon, instead of ion gates that open and close and move charges with a certain intensity, changing voltage, we have dynamic synaptic weights. But the model equation maintains a clear and direct similarity with the biological neuron.
What does this mean?
Instead of V, the voltage of the biological neuron, the state of Neuraxon is s. It changes over time too, so ds/dt is a function of the weights, the activations, and the previous state.
Unlike a classical AI model, where the synaptic weights of a network represent stereotyped outputs to an input, in Neuraxon the weights are not static.
Imagine, for example, an “email inbox” automatic response mechanism.
In classical AI, the rule does not adjust or change over time or context.
In Neuraxon, it is taken into account whether the “email input” comes from the same person (which could indicate urgency) or whether it arrives on a weekend (which may generate a no-response output). In other words, the rule remains, but when and how the response is given is modulated.
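The modulation idea above can be sketched in code. This is a hypothetical toy unit, not the actual Neuraxon implementation: a continuous state s with a leak term, whose fixed weights are scaled by an assumed `context_gain` parameter standing in for contextual modulation (the "same rule, different when and how").

```python
import numpy as np

def step(s, inputs, w, context_gain, dt=0.05, tau=1.0):
    """One Euler step of a hypothetical continuous-state unit:

    ds/dt = (-s + g * sum_i w_i * f(x_i)) / tau

    -s decays the state toward rest (the state "leaks" like a membrane);
    context_gain (g) stands in for modulation: the weights w stay fixed,
    but how strongly they drive the state depends on context.
    """
    drive = context_gain * np.dot(w, np.tanh(inputs))
    return s + dt * (-s + drive) / tau

w = np.array([0.8, -0.3])   # the "rule" stays the same
x = np.array([1.0, 0.5])    # the same email input

# Same input, same weights, different context: the trajectories differ.
s_urgent, s_weekend = 0.0, 0.0
for _ in range(100):
    s_urgent = step(s_urgent, x, w, context_gain=1.5)   # e.g. repeated sender
    s_weekend = step(s_weekend, x, w, context_gain=0.2) # e.g. weekend input
print(s_urgent > s_weekend)
```

The design point: nothing in the weights changed, yet the state that would eventually drive an output evolved differently because the context modulated how the same wiring was used.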
Do LLMs compute time?

Large language models appear to show deep understanding in many contexts, but they operate under a logic different from biological systems (Vaswani, 2017). They do not function based on an internal temporal dynamic, on a “change in potential” or on “synaptic weights” that modulate response, but rather process discrete sequences.
In LLMs, “time” does not exist, which makes it difficult for them to simulate biological behavior (such as intelligence). LLMs know how to distinguish which word comes before and which comes after, but they do not grant an experience of duration or persistence. Order replaces time.
Unlike Neuraxon, they do not possess internal rhythms that speed up or slow down, nor do they show progressive habituation to repeated stimuli, nor can they dynamically anticipate based on an internal state that changes over time.
The LLM model computation would be something like:
output = Fθ(input)
so outcomes are fixed solutions from a function (combination) of inputs.
There is no state as a function of time. These are data that form huge matrices and change their value through a specific function, which, as in the example cited, restricts the possibilities: email input → automatic response.
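The stateless, fixed-function view can be illustrated with a toy sketch; here Fθ is reduced to a single frozen random linear map over a pooled one-hot context (a caricature for illustration, not a real transformer).

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # frozen parameters theta: fixed after training

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def next_token_probs(context_onehot):
    """p(x_{n+1} | x_1..x_n) = softmax(F_theta(x_1..x_n)), with F_theta
    reduced here to one linear map over a mean-pooled context."""
    return softmax(W @ context_onehot.mean(axis=0))

ctx = np.eye(4)[[0, 2, 1]]        # a toy token sequence over a 4-word vocab
p1 = next_token_probs(ctx)
p2 = next_token_probs(ctx)        # same input, presented again "later"
print(np.allclose(p1, p2))        # True: no internal state, no time
```

Order is all the model sees: the same context always yields the same distribution, because there is no state variable that evolves between calls.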
Wrapping up. The distance between bio-inspired models such as Neuraxon and large language models should not be explained in terms of computational power or data volume. There is a deeper difference.
The brain is, in itself, a continuous temporal system. Its functioning is defined by dynamics that unfold over time, by states that evolve, decay, and reorganize permanently, even in the absence of external stimuli (Deco et al., 2009; Northoff, 2018).
Neuraxon deliberately positions itself within that same logic. It does not attempt to imitate 1 to 1 the biophysical complexity of the brain, but it explicitly incorporates time as a computational variable. Its internal state evolves continuously, carries the past, and modulates the present, allowing adaptation without the need for a reset.
LLMs, by contrast, operate very differently. They manipulate symbols ordered in discrete sequences without their own temporal dynamics. There is no time, only order. There is no adaptation, only pre-defined responses.
As long as time does not form part of the state governing computation, LLMs may be effective, but they will hardly be autonomous in a strong sense.
Future artificial intelligence aims to operate in dynamic environments. This is the reason why Neuraxon includes time as a fundamental variable.
A living intelligence tissue…
How Does This Relate Back to Qubic?
Qubic provides the continuously running, stateful computational environment required for time-aware intelligence.
It is the natural substrate on which models like Neuraxon (adaptive, persistent, and never “resetting”) can exist and evolve.
Addenda
Take a look at the equations. Don't panic!
1. Biological neuron: V potential, the “sum of gate fluxes in & out”

2. Neuraxon model equation: a clear and direct similarity with the biological neuron.
s is the state; wᵢ and f(sᵢ) are the dynamic synaptic weights.

3. LLM model equation: inputs (ordered in a matrix) produce matrix outputs through a fixed function:
p(xₙ₊₁ | x₁, …, xₙ) = softmax(Fθ(x₁, …, xₙ))

References
Deco, G., Jirsa, V. K., Robinson, P. A., Breakspear, M., & Friston, K. J. (2009). The dynamic brain. PLoS Computational Biology, 5(8), e1000092.
Northoff, G. (2018). The spontaneous brain. MIT Press.
Vaswani, A., et al. (2017). Attention is all you need. NeurIPS.
Vivancos, D., & Sanchez, J. (2025). Neuraxon: A new neural growth & computation blueprint. Qubic Science.
#Qubic #Neuraxon

Neuromodulation: What the Brain Does, What Transformers Don't Do, and What Neuraxon Aims to Do

Written by Qubic Scientific Team
Neuraxon Intelligence Academy — Volume 3

1. Neuromodulation in the Brain: The Foundation of Adaptive Intelligence
Neuromodulation refers to the set of mechanisms that regulate how the nervous system functions at any given moment, without changing its basic architecture. Thanks to neuromodulation, the brain can learn quickly or slowly, become exploratory or conservative, and remain open to novelty or focus on what is already known. The wiring does not change; what changes is how that wiring is used. This concept is central to understanding brain-inspired AI and the architecture behind Qubic's Neuraxon.

Beyond Binary: Ternary Dynamics as a Model of Living Intelligence

Written by Qubic Scientific Team

The brain is dynamic and non-binary
Biological brain networks do not operate as a decision switch between activation and rest. In living systems, inactivity itself implies dynamism. Absolute “rest” would be incompatible with life. As we saw in the first chapter, life unfolds in time.
An individual neuron may appear as an all-or-nothing event, transmitting electrical current to another neuron in order to inhibit or excite it. However, prior to that transmission (the action potential), the neuron continuously receives positive and negative inputs in a region called the dendrites. If the global sum of these inputs exceeds a certain threshold, a physical conformational change occurs, and the electrical current propagates along the axon toward the next neuron. Most of the time, neuronal processing takes place below the action threshold, where excitatory and inhibitory currents are continuously integrated.
In computational neuroscience, it is well established that the brain is a continuous dynamic system whose states evolve even in the absence of external stimuli (Deco et al., 2009; Northoff, 2018).
There are no discrete events or resets in the brain. Each external stimulus acts upon a living system that already has a prior configuration. A stimulus may bias an excitatory or inhibitory state, but never a static one. It is like a ball on a football field: the same trajectory triggers different outcomes depending on the dynamic positions of the players. With an identical path, the play may fail or become a decisive assist.
The mechanisms that keep neurons active independently of immediate stimuli are well known.
One of them consists of subthreshold inputs, which alter the membrane potential without generating an action potential. 
Others include silent synapses and dendritic spines, which preserve latent connectivity between neurons or promote local activation. 
The most important mechanism involves metabotropic receptors linked to neurotransmitters, which organize context. They don't directly determine whether an action potential is triggered. Instead, they define what is relevant or not, what reward prediction a stimulus carries, what level of alert or danger is present, how much novelty exists in the system, what degree of sustained attention is required, what balance between exploration and exploitation is appropriate, what should be encoded versus forgotten, how the internal state is regulated, and when impulse control or temporal stability is advantageous.
In other words, metabotropic receptors implement a form of wise metacontrol. They are not data, but parameters! They function as dynamic variables that adjust system behavior. They allow the system to become sensitive to the functional meaning of a situation (novelty, relevance, reward, or threat) without requiring immediate responses. 
Returning to the football metaphor, metabotropic receptors correspond to team tactics: deciding when to attack or defend, that is, deciding how the game is played.
From a computational perspective, these mechanisms operate through intermediate states. They are not binary (active/inactive). The system operates in three modes: excitatory, inhibitory, and an intermediate state that produces no immediate output but modulates future dynamics.
When we speak of ternary in biological brain networks, we are not referring to a mathematical abstraction or calculus but to a literal functional description of how the brain maintains balance over time.
For this reason, computational neuroscience does not primarily study input–output mappings, but rather how states reorganize continuously. These states are fundamentally predictive in nature (Friston, 2010; Deco et al., 2009).
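The "parameters, not data" idea can be made concrete with a toy learning rule whose step size is gated by a context signal. This is our own minimal construction, not code from the article or from the neuroscience literature; all names and constants are hypothetical.

```python
# Toy illustration: a Hebbian-style weight update whose magnitude is gated
# by a context signal. The signal carries no content of its own -- like a
# metabotropic "metacontrol" variable, it only rescales how strongly the
# very same input changes the connection.

def update_weight(w, pre, post, salience, base_rate=0.01):
    """Return the new weight after one context-modulated Hebbian step."""
    # salience in [0, 1] encodes relevance/novelty/reward, not data
    return w + base_rate * salience * pre * post

w_dull    = update_weight(1.0, pre=1.0, post=1.0, salience=0.1)
w_salient = update_weight(1.0, pre=1.0, post=1.0, salience=1.0)
# Identical input, different learning: the context signal decides how much
# of the experience is written into the connection.
```

The same stimulus thus leaves a ten-times-larger trace when the context marks it as salient, without the context signal ever determining the output itself.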
LLMs are binary computations
In large language models, the concept of ternarity does not make sense. Learning is fundamentally based on error backpropagation. That is, once the magnitude of the error relative to the expected data is known, an optimization algorithm adjusts parameters using an external signal.
How does this work? The model produces an output, for example the prediction of the most likely next word: “Paris is the capital of …”. If the response is Finland, this is compared with the correct word from the training set (France). From this comparison, a numerical error is computed. This error quantifies how far the prediction deviates from the expected value. The error is then transformed into a gradient, namely a mathematical signal that indicates in which direction and by how much the model’s parameters should be adjusted to reduce the error. The weights are updated backward only after the output has been produced and evaluated.
The error is computed a posteriori, the weights are adjusted so that the correct response becomes France, and the system resumes operation as if nothing had happened.
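This loop can be sketched numerically for a one-parameter model with a squared error; the function name and constants are illustrative, not taken from any LLM framework.

```python
# Sketch of a-posteriori learning: the output comes first, the error is
# measured afterwards, and only then is the parameter nudged against the
# gradient before the system resumes operation.

def train_step(w, x, target, lr=0.1):
    prediction = w * x            # 1. forward pass: produce the output
    error = prediction - target   # 2. compare with the expected value
    gradient = 2 * error * x      # 3. d/dw of the squared error (pred - target)^2
    return w - lr * gradient      # 4. adjust backward, then continue

w = 0.0
for _ in range(50):
    w = train_step(w, x=1.0, target=3.0)
# w has converged close to the target mapping (w ~ 3) purely by error reduction.
```

Nothing in the loop changes while the output is being produced; all adjustment happens after the fact, which is exactly the separation between dynamics and learning discussed below.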
In large language models, the separation between dynamics and learning is especially pronounced. During inference, parameters remain fixed; there is no online plasticity, no habituation, no fatigue, and no time-dependent adaptation. The system does not change by being active.
In the football metaphor, LLMs resemble a coach who reviews mistakes after the match and adjusts tactics for the next one. But during the match itself, the team plays the full ninety minutes without any possibility of technical or tactical modification! 
There is pre-match strategy and post-match correction, but no dynamism during play! 
LLMs are therefore not ternary in a functional sense. They are matrices of “attention” (transformers) trained offline (Vaswani et al., 2017). This is not a quantitative limitation but an ontological difference.
Neuraxon and Aigarth trinary dynamics
Neuraxon introduces a fundamentally different framework. Its basic unit is not an input–output function, as in LLMs, but an internal continuous state that evolves over time. In Neuraxon, excitation is represented as +1, inhibition as −1, and between these two states there exists a neutral range represented by 0.
At each moment, the system integrates the influence of current inputs, recent history, and internal mechanisms in order to generate a discrete ternary output (excitation, inhibition, or neutrality).
The relationship between time and ternary dynamics is central. The neutral state does not represent the absence of computation or inactivity but a subthreshold phase in which the system accumulates influence without producing immediate output. It is comparable to a dynamic tactical shift in a football team, regardless of whether it leads to a goal for or against.
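A minimal sketch of such a unit, assuming a leaky continuous state and symmetric thresholds. This is our simplification for illustration, not Neuraxon's actual implementation; all constants are arbitrary.

```python
# Illustrative ternary unit: a continuous internal state integrates inputs
# over time; the output is +1 (excitation), -1 (inhibition), or 0 (neutral).
# In the neutral range the unit emits nothing, yet its accumulated state
# still biases what happens next -- there is no reset between stimuli.

class TernaryUnit:
    def __init__(self, threshold=1.0, leak=0.9):
        self.state = 0.0            # continuous internal state
        self.threshold = threshold
        self.leak = leak            # decay toward rest, never a hard reset

    def step(self, net_input):
        self.state = self.leak * self.state + net_input
        if self.state >= self.threshold:
            return +1               # excitatory output
        if self.state <= -self.threshold:
            return -1               # inhibitory output
        return 0                    # neutral: no output, but state persists

unit = TernaryUnit()
# Three weak inputs accumulate until the unit fires; a strong negative
# input then flips it; a small input afterwards leaves it neutral.
outputs = [unit.step(x) for x in (0.4, 0.4, 0.4, -2.0, 0.1)]
```

Note how the first two inputs produce 0 yet are not lost: they move the internal state toward the threshold, which is precisely the subthreshold accumulation described above.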
Aigarth expresses the same logic at a structural level. Not only are the units themselves ternary, but the network can grow, reorganize, or collapse depending on its utility, introducing an evolutionary dimension that reinforces continuous adaptation. The Neuraxon–Aigarth combination (micro–macro) gives rise to computational tissues capable of remaining active (intelligence tissue units), something impossible for architectures based exclusively on backpropagation.

The hardware question cannot be ignored. At present, there is no general-purpose ternary hardware, but there are active research lines in ternary logic, including multivalued memristors and neuromorphic computation based on resistive or spintronic devices (Yang et al., 2013; Indiveri & Liu, 2015). These approaches aim to reduce energy consumption and, more importantly, to achieve ternary computation aligned with physical, living, and continuous dynamics.
Does a ternary architecture make sense even without dedicated ternary hardware? Despite this limitation, it does, because architecture precedes physical substrate. By designing ternary systems, we reveal the inability of binary logic to reflect a dynamic world. At the same time, ternary architectures such as Neuraxon–Aigarth can already yield improvements on existing binary hardware by reducing unnecessary activity.
References
Deco, G., Jirsa, V. K., Robinson, P. A., Breakspear, M., & Friston, K. J. (2009). The dynamic brain: From spiking neurons to neural masses and cortical fields. PLoS Computational Biology, 5(8), e1000092.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
Indiveri, G., & Liu, S.-C. (2015). Memory and information processing in neuromorphic systems. Proceedings of the IEEE, 103(8), 1379–1397.
Northoff, G. (2018). The spontaneous brain: From the mind–body problem to a neurophenomenology. MIT Press.
Vaswani, A., Shazeer, N., Parmar, N., et al. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
Yang, J. J., Strukov, D. B., & Stewart, D. R. (2013). Memristive devices for computing. Nature Nanotechnology, 8(1), 13–24.
#aigarth #trinary

Neural Networks in AI and Neuroscience: How the Brain Inspires Artificial Intelligence

Written by $Qubic Scientific Team

Neuraxon Intelligence Academy — Volume 4

The word network shows up constantly in both neuroscience and artificial intelligence. But despite sharing the same label, biological neural networks and artificial neural networks are fundamentally different systems. To understand what each one actually does, and where a third approach fits in, we need to look at the architecture and behavior of networks at every level.
Biological Neural Networks: How the Brain Processes Information
A biological neural network is a system of interconnected neurons whose function is to process information and generate behavior. These networks are dynamic. They stay active over time, even when we are not consciously engaged in any task. They carry an energetic cost, which in the case of the human brain is remarkably low for the complexity it produces.
Biological networks integrate both internal and external signals using their own language: time-frequency. Think of a musical band with multiple instruments playing at different rhythms. The bass drum carries the tempo, the bass plays two notes per beat, and the cymbals fill in the sixteenth notes. The melody moves freely without losing the beat. The musicians couple their scores at different rhythms that fit together perfectly. These are nested frequencies, and this is exactly how brain networks function: the time-frequency languages of different networks nest within one another, a concept known as cross-frequency coupling.
From Single Neurons to Massive Networks
Everything begins with the neuron. That single nerve cell generates an action potential, a brief electrical impulse that propagates along the axon. The neuron receives signals through the dendrites, integrates them in the soma, and transmits the signal if it surpasses a threshold. We covered this process in detail in NIA Volume 1: Why Intelligence Is Not Computed in Steps, but in Time and NIA Volume 2: Ternary Dynamics as a Model of Living Intelligence.
Neurons connect to other neurons through chemical synapses, where neurotransmitters are released (see NIA Volume 3: Neuromodulation and Brain-Inspired AI), or through electrical synapses, where current passes directly between cells. To form networks, many neurons interconnect and create recurrent circuits. But this integration is non-linear, meaning the response of the whole does not equal the simple sum of its parts. The magnitude is staggering: the human brain contains approximately 86 billion neurons and somewhere between 10¹⁴ and 10¹⁵ synapses (Azevedo et al., 2009).
Small-World Properties and Excitation-Inhibition Balance
At the topological level, these networks display small-world properties: high local clustering combined with short global connections. This architecture enables efficient communication across the brain while maintaining specialized local processing.
The functioning of biological neural networks depends on the balance between excitation and inhibition. If excitation dominates, activity destabilizes. If inhibition dominates, the network goes silent. Dynamic stability arises from the balance between both forces. This balance is maintained through synaptic plasticity, the mechanism that allows the strength of connections to change based on experience. On top of that, neuromodulation adjusts circuit gain, controlling how strongly an input produces an output (Marder, 2012). In a threatening situation, for example, noradrenaline increases sensory sensitivity and the capacity for rapid learning.
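This qualitative balance can be illustrated with a toy rate model. It is entirely our construction, with arbitrary gains and constants, intended only to show the three regimes just described.

```python
# Toy recurrent rate unit: excitatory gain E amplifies its own activity,
# inhibitory gain I damps it. Excess excitation saturates the unit,
# excess inhibition silences it, and balanced gains yield a stable
# intermediate level of activity.

def settle(E, I, steps=200):
    r = 0.1                                            # initial firing rate
    for _ in range(steps):
        drive = (E - I) * r + 0.2                      # recurrent drive + weak input
        r = max(0.0, min(1.0, 0.9 * r + 0.1 * drive))  # leaky, clipped rate
    return r

runaway  = settle(E=3.0, I=0.5)   # excitation dominates: saturates at the ceiling
silent   = settle(E=0.5, I=3.0)   # inhibition dominates: activity near zero
balanced = settle(E=1.0, I=1.0)   # balanced gains: stable intermediate rate
```

Only the balanced setting leaves the unit responsive: it neither pins itself to the ceiling nor collapses toward silence.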
Multiple Temporal Scales and Cerebral Cortex Function
Networks operate at multiple temporal scales simultaneously. At the neuronal level, action potentials fire in milliseconds. Neuronal oscillations unfold in seconds. Synaptic changes develop over hours or days, and structural reorganization happens across years. Everything works in a harmonic, dynamic, and intertwined pattern.
But not everything communicates with everything without structure. The cerebral cortex is organized into specialized functional networks. The most important include the default mode network, linked to self-reference and thinking about the self and others; the central executive network, linked to direct task execution; the salience network, which detects what is relevant at each moment and allows switching between different modes; the sensorimotor network, which sustains voluntary movements; and various attention networks. Humans also possess a distinctive language network, enabling both comprehension and production of language.
In biological networks, no isolated note is a symphony. The symphony emerges from the dynamic pattern of relationships between notes. The brain does not contain things. It does not store memories the way a hard drive stores files. The brain constructs dynamic configurations.
Image courtesy of DOI: 10.3389/fnagi.2023.1204134
Artificial Neural Networks: How Deep Learning Models Work
An artificial neural network (ANN) is a mathematical model designed to approximate complex functions from data. It draws abstract inspiration from the brain: it uses interconnected units called "artificial neurons," but these are not cells. They are algebraic operations. Calling an algebraic operation a neuron is arguably an exaggerated extrapolation, and calling language prediction "intelligence" may be equally misleading. But since these are the established terms, it is important to understand them and separate substance from hype.
How an Artificial Neuron Works
Each artificial neuron performs three steps. First, it receives a set of numerical inputs. Then it multiplies each input by a synaptic weight, which is an adjustable parameter. Finally, it sums the results and applies an activation function that introduces non-linearity. Common activation functions include the Sigmoid, which compresses values between 0 and 1, and ReLU (Rectified Linear Unit), which cancels negative values and lets positive ones pass through.
Without non-linearity, the network would simply perform a linear transformation, incapable of modeling complex patterns. ANNs are organized into input layers, where data enter; hidden layers, where data are progressively transformed; and an output layer, which generates the prediction.
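The three steps can be written out directly. This is a generic sketch with names of our choosing, not any particular framework's API.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))    # compresses values into (0, 1)

def relu(z):
    return max(0.0, z)                   # cancels negatives, passes positives

def artificial_neuron(inputs, weights, bias, activation):
    # Steps 1-2: weight each input and sum; step 3: apply the non-linearity.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(z)

out = artificial_neuron([1.0, -2.0], weights=[0.5, 0.25], bias=0.1,
                        activation=relu)
```

Everything here is ordinary algebra on numbers, which is why calling such a unit a "neuron" is an analogy rather than a description.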

From the Perceptron to Deep Learning
All modern architectures trace their origins to the perceptron (Rosenblatt, 1958), a simple linear neuron with a threshold. Modern deep learning networks can contain hundreds of layers and billions of parameters. But at their core, an ANN functions like an enormous automated spreadsheet that adjusts millions of numerical cells until the output matches the expected result.
Backpropagation and Gradient Descent: How Artificial Networks Learn
Learning in artificial networks does not work the way biological learning does. There is no adjustment of neuromodulators or synaptic intensity based on lived experience. Instead, learning is based on minimizing an error function that quantifies the difference between the network's prediction and the correct answer.
Consider a simple example: the model is asked to complete "Paris is the capital of..." If the prediction is Italy, the error function measures the gap between Italy and France, then adjusts the weights accordingly. The central mechanism behind this adjustment is backpropagation (Rumelhart et al., 1986). This algorithm calculates the error at the output, propagates that error backward layer by layer, and adjusts the weights using gradient descent, a mathematical method that modifies parameters in the direction that reduces the error.
Formally, learning consists of optimizing a differentiable function in a space of many dimensions. If you think of physical space, the dimensions are x, y, and z. But in language, imagine dimensions like singular, plural, feminine, masculine, verb, subject, attribute, noun, adjective, intonation, and synonym. Introduce millions of dimensions and enough computational power, and a model can learn that Paris is the capital of France simply by reducing prediction errors during training.
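A hand-derived miniature of this process, assuming a single hidden unit and a squared-error loss, shows the error flowing backward layer by layer. This is our toy; real frameworks automate exactly this chain rule at massive scale.

```python
# Toy network: x -> hidden h = relu(w1 * x) -> prediction y = w2 * h.
# The output error is computed at the end, propagated backward layer by
# layer via the chain rule, and each weight descends its gradient.

def forward_backward(w1, w2, x, target, lr=0.05):
    # Forward pass
    a = w1 * x
    h = max(0.0, a)                  # ReLU
    y = w2 * h
    # Error at the output (loss = 0.5 * (y - target)^2, so dL/dy = y - target)
    err = y - target
    # Backward pass, layer by layer
    g_w2 = err * h                   # gradient for the output weight
    g_h  = err * w2                  # error flowing back into the hidden unit
    g_a  = g_h if a > 0 else 0.0     # ReLU passes gradient only where active
    g_w1 = g_a * x                   # gradient for the input weight
    # Gradient descent update
    return w1 - lr * g_w1, w2 - lr * g_w2

w1, w2 = 0.5, 0.5
for _ in range(200):
    w1, w2 = forward_backward(w1, w2, x=1.0, target=2.0)
# After training, the prediction w2 * relu(w1 * x) is close to the target 2.
```

The hidden layer never receives its own target; it only receives the share of the output error that the chain rule assigns to it, which is the defining trait of backpropagation.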
Architectures of Artificial Neural Networks
Although the terminology overlaps with neuroscience, the process does not resemble how a living system learns. In an ANN, adjustment depends on global calculation and explicit knowledge of the final error. The network needs to know exactly how wrong it was.
If a network learns to recognize cats, it receives thousands or millions of labeled images. Each time it fails, it slightly adjusts the weights. After millions of iterations, the internal pattern stabilizes into a configuration that discriminates cats from other objects. The process is purely statistical. The network does not "understand" what a cat is. It detects numerical correlations in pixels. It does not hold a "world model" of a cat, only matrices of numbers on massive scales. For a deeper look at why this matters, read our analysis of benchmarking world model learning.
There are several key architectures of artificial neural networks. Convolutional networks (CNNs) use spatial filters that detect edges, textures, and hierarchical patterns, making them essential for computer vision. Recurrent networks (RNNs, LSTMs) incorporate temporal memory for processing sequences. And the now-dominant Transformers use attention mechanisms that dynamically weight which parts of the input are most relevant (Vaswani et al., 2017). Transformers currently power most large language models in natural language processing.
The growth of these networks does not happen organically as in living systems. It happens through explicit design and parameter scaling via massive training in high-performance computing centers. Adaptation is limited to the training period. Once trained, the network does not spontaneously reorganize its architecture. Any modification requires a new optimization process. As we explored in That Static AI Is a Dead End, this frozen nature is a fundamental limitation of current AI systems.
Despite sharing the name "network," the similarity between artificial and biological neural networks is limited. The analogy is structural and abstract: both use interconnected units and learning through adjustment of connections. But the brain is an evolutionary, embodied, and self-regulated system. An ANN is a function optimizer in a numerical space.
Between Biological and Artificial Networks: How Neuraxon Aigarth Bridges the Gap
The networks simulated in Neuraxon Aigarth are conceptually positioned between biological networks and conventional artificial neural networks. They are not living tissue, but they are not merely mathematical functions optimized by gradient either. Their objective is to approximate dynamics typical of biological systems, including multiscale plasticity, context-dependent modulation, and self-organization, all within a computational framework built for Qubic's decentralized AI infrastructure.
If in Volume 1 we described self-organized metabolic systems and in Volume 2 we explored differentiable optimizing functions, Neuraxon attempts to incorporate dynamic properties of the former without abandoning the mathematical formalization of the latter.
Trivalent States: Capturing Excitation-Inhibition Balance
Instead of typical continuous activations (real values after a ReLU, for example), Neuraxon uses trivalent states: -1, 0, and +1. Here, +1 represents excitatory activation, -1 represents inhibitory activation, and 0 represents rest or inactivity.
This scheme does not attempt to copy the biological action potential. Rather, it captures the functional principle of excitation-inhibition balance described in the biological networks section above. In the brain, stability emerges from the balance between these forces. In Neuraxon, the discrete state space imposes a dynamic closer to state-transition systems than to simple continuous transformations.
In contrast to classical artificial networks, where activation is a floating-point number without physiological meaning, the trivalent system imposes structural constraints that shape how activity propagates through the network.
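A trivalent activation can be sketched as a simple threshold function. The threshold value here is an illustrative assumption, not Neuraxon's published transition rule; the point is that activity snaps to one of three discrete states instead of a continuous float.

```python
# Hypothetical trivalent activation: maps a neuron's weighted input to
# -1 (inhibitory), 0 (rest), or +1 (excitatory). The threshold is an
# illustrative assumption, not Neuraxon's actual transition rule.

THRESHOLD = 0.5

def trivalent(weighted_input):
    if weighted_input > THRESHOLD:
        return +1   # excitatory activation
    if weighted_input < -THRESHOLD:
        return -1   # inhibitory activation
    return 0        # rest / inactivity

states = [trivalent(x) for x in (-0.9, 0.2, 0.8)]  # -> [-1, 0, 1]
```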
Dual-Weight Plasticity: Fast and Slow Learning
Biological neural networks exhibit plasticity at different temporal scales: rapid changes in synaptic efficacy and slower consolidation over time. Neuraxon introduces this idea through two weight components:
w_fast: rapid changes that are sensitive to the immediate environment.
w_slow: slow changes that stabilize repeated patterns over time.
This prevents the system from depending exclusively on a homogeneous weight update like standard backpropagation. Part of learning can be transient, while another part is gradually consolidated. This mechanism introduces a dimension absent in most artificial neural networks: the learning rate is not fixed, but dependent on the global state of the system.
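The fast/slow split can be sketched as two update rules on one connection. The decay and learning rates below are illustrative assumptions; the source specifies only that one component reacts quickly and decays, while the other consolidates slowly.

```python
# Sketch of dual-weight plasticity. Decay and learning rates are
# illustrative assumptions; only the fast/slow split comes from the text.

def update_weights(w_fast, w_slow, signal,
                   lr_fast=0.5, lr_slow=0.01, decay_fast=0.8):
    # Fast component: reacts strongly to the immediate signal but decays.
    w_fast = decay_fast * w_fast + lr_fast * signal
    # Slow component: consolidates a small fraction of each repetition.
    w_slow = w_slow + lr_slow * signal
    return w_fast, w_slow

wf, ws = 0.0, 0.0
for _ in range(50):                 # a repeatedly presented pattern...
    wf, ws = update_weights(wf, ws, signal=1.0)
# ...saturates w_fast quickly while leaving a lasting trace in w_slow.
effective_weight = wf + ws
```

A transient signal moves mostly `w_fast` and fades; only repetition accumulates in `w_slow`, which is the consolidation behavior the text describes.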
Contextual Neuromodulation Through the Meta Variable
In biological networks, neuromodulators such as noradrenaline and dopamine do not transmit specific informational content. Instead, they alter the gain and plasticity of broad neuronal populations. We explored this in depth in NIA Volume 3: Neuromodulation and Brain-Inspired AI.
In Neuraxon, the variable meta plays a functionally analogous role. It does not encode specific information, but modifies the magnitude of synaptic updating. This approximates the biological principle that learning depends on motivational or salience context. In a conventional artificial network, the gradient is applied uniformly based on error. In Neuraxon, learning can be intensified or attenuated according to internal state or global external signals.
The conceptual difference is significant. In classical deep learning networks, error drives learning. In Neuraxon, error can coexist with a contextual modulatory signal that alters how much is learned at any given moment.
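That coexistence of error and modulation can be sketched as a gain on the learning step. The multiplicative form is an illustrative assumption; the source says only that `meta` scales the magnitude of synaptic updating.

```python
# Sketch of meta-modulated learning: the gradient step is scaled by a
# global `meta` signal. The multiplicative form is an assumption.

def modulated_update(weight, gradient, meta, base_lr=0.1):
    # meta near 0 suppresses learning; meta above 1 intensifies it.
    return weight - base_lr * meta * gradient

w = 1.0
w_low = modulated_update(w, gradient=0.5, meta=0.1)   # low salience: tiny step
w_high = modulated_update(w, gradient=0.5, meta=2.0)  # high salience: big step
```

The same error produces a different amount of learning depending on context, which is exactly the departure from uniform gradient application described above.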
Self-Organized Criticality and Adaptive Behavior
Biological networks operate near a regime called self-organized criticality, where the system maintains equilibrium between order and chaos. This regime allows flexibility without loss of stability.
Neuraxon models this property by allowing the network to evolve toward intermediate dynamic states in which small perturbations can produce broad reorganizations without collapsing the system.
In models such as the Game of Life extended with proprioception that the team is currently developing, the system can receive external signals (environment) and internal signals (its own state, energy, previous collisions). If an agent repeatedly collides with an obstacle, an increase in the meta signal may be generated, analogous to an increase in arousal. That signal temporarily increases plasticity, facilitating structural reorganization.
Here, the network does not learn only because it makes mistakes. It learns because the environment acquires adaptive relevance. The similarity with the brain remains limited: Neuraxon does not possess biology, metabolism, or subjective experience. However, it introduces dynamic dimensions absent in most conventional artificial neural networks, positioning it as a genuinely novel approach to brain-inspired AI on decentralized infrastructure.
The computational power required to run Neuraxon simulations is provided by Qubic's global network of miners through Useful Proof of Work, turning AI training into the consensus mechanism itself.

Scientific References
Azevedo, F. A. C., et al. (2009). Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. Journal of Comparative Neurology, 513(5), 532-541. DOI: 10.1002/cne.21974
Marder, E. (2012). Neuromodulation of neuronal circuits: Back to the future. Neuron, 76(1), 1-11. DOI: 10.1016/j.neuron.2012.09.010
Rosenblatt, F. (1958). The Perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386-408. DOI: 10.1037/h0042519
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533-536. DOI: 10.1038/323533a0
Vaswani, A., et al. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30. arXiv: 1706.03762
Brain network images courtesy of: DOI: 10.3389/fnagi.2023.1204134
#AI #AGI