Nvidia’s Vera Rubin cuts AI compute costs, a challenge for decentralised GPU networks like Render that monetise scarce, idle processing resources.
Nvidia’s Rubin platform can make running advanced AI models cheaper, undercutting the premise that crypto compute networks profit from GPU power being scarce.
Rubin, Nvidia’s new processing architecture, was officially unveiled on Monday at CES 2026 and makes training and running AI models more efficient. Nvidia CEO Jensen Huang said it is now in “full production” and consists of six co-designed chips sold under the name Vera Rubin, in honour of the American astronomer Vera Florence Cooper Rubin.
For crypto projects premised on computing power remaining scarce, those advances can undermine the economics of their models.
But historically, gains in computing efficiency have mostly increased demand rather than reduced it. Cheaper, more powerful computing has continually opened up new workloads and use cases, lifting total utilisation even as unit costs fell.
Some investors seem to be betting that this is still true, since GPU-sharing tokens like Render RENDER$2.12, Akash AKT$0.41, and Golem GLM$0.26 have all gone up more than 20% in the last week.
Rubin’s biggest efficiency gains accrue to hyperscale data centres. That leaves blockchain-based compute networks to compete for short-term tasks and workloads that sit outside AI factories.
Efficiency gains challenge scarcity-based crypto narratives
Cloud computing is a modern illustration of how efficiency can increase demand. Amazon Web Services and other providers made computing power easier and cheaper for developers and businesses to access, unleashing a wave of new workloads that consumed more compute overall.
That runs against the common-sense intuition that greater efficiency should lower demand: if each task uses fewer resources, fewer servers or GPUs should be needed.
In computing, that rarely holds. As prices fall, more users adopt the service, existing customers run heavier workloads, and entirely new applications become viable.
Economists call this the “Jevons Paradox.” William Stanley Jevons described it in his 1865 book “The Coal Question,” noting that improvements in coal efficiency did not reduce fuel consumption but instead increased industrial usage.
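The mechanism behind the paradox can be sketched with a toy demand model: when demand is price-elastic (elasticity above 1), halving the cost per task more than doubles the number of tasks run, so total resource consumption rises. All numbers and the constant-elasticity form below are illustrative assumptions, not data from the article.

```python
# Toy illustration of the Jevons Paradox under price-elastic demand.
# Assumption: constant-elasticity demand, tasks ~ cost^(-elasticity).

def total_gpu_hours(cost_per_task: float, elasticity: float,
                    base_cost: float = 1.0, base_tasks: float = 1000.0) -> float:
    """Total resource use = number of tasks demanded x resources per task."""
    tasks = base_tasks * (cost_per_task / base_cost) ** (-elasticity)
    return tasks * cost_per_task

before = total_gpu_hours(cost_per_task=1.0, elasticity=1.5)   # 1000.0
after = total_gpu_hours(cost_per_task=0.5, elasticity=1.5)    # 2x efficiency gain

print(before, after)  # efficiency halves per-task cost, yet total use rises
```

With elasticity 1.5, halving the per-task cost lifts total GPU-hours from 1000 to roughly 1414: cheaper compute, more total usage. With elasticity below 1, the same calculation shows consumption falling, which is why the paradox only bites when cheaper compute unlocks genuinely new demand.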
Applied to crypto-based compute networks, the implication is that client demand may shift toward short-term, flexible workloads that don’t fit long-term hyperscale contracts.
That means networks like Render, Akash, and Golem are really competing on flexibility. Their value comes from aggregating idle or underused GPUs and routing short-lived jobs to wherever capacity exists. The approach works well when demand is high, and it doesn’t depend on having the most powerful hardware.
Render and Akash are decentralised GPU platforms where people can rent GPU power for processing-heavy tasks such as 3D rendering, visual effects, or even AI training. They let users access GPU compute without paying for dedicated infrastructure or hyperscale pricing structures. Golem, meanwhile, is a decentralised marketplace for unused GPU resources.
Decentralised GPU networks handle batch workloads well, but they can’t guarantee the predictability, tight synchronisation, and long-term availability that hyperscalers are built to provide.
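The aggregation model described above can be sketched as a simple capacity-matching problem. The following is a hypothetical first-fit scheduler, not the actual protocol of Render, Akash, or Golem; provider names, job names, and the VRAM-only capacity model are all invented for illustration.

```python
# Hypothetical sketch of decentralised GPU matching: short-lived jobs are
# routed to whichever idle provider advertises enough spare capacity.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    free_vram_gb: int   # spare capacity advertised by an idle GPU

@dataclass
class Job:
    name: str
    vram_gb: int        # capacity the short-lived job needs

def assign(jobs, providers):
    """Greedy first-fit: biggest jobs first, each to the first provider that fits."""
    placements = {}
    for job in sorted(jobs, key=lambda j: -j.vram_gb):
        for p in providers:
            if p.free_vram_gb >= job.vram_gb:
                p.free_vram_gb -= job.vram_gb
                placements[job.name] = p.name
                break
        else:
            placements[job.name] = None  # no capacity anywhere: job must wait
    return placements

providers = [Provider("gpu-a", 24), Provider("gpu-b", 8)]
jobs = [Job("render-frame", 6), Job("fine-tune", 20), Job("upscale", 8)]
placements = assign(jobs, providers)
print(placements)
```

In this run the 20 GB job lands on the 24 GB provider, the 8 GB job on the 8 GB provider, and the last job finds no slot: the sketch shows both the strength (idle capacity gets used) and the weakness (no availability guarantee) of the model.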
Structural supply constraints limit hyperscale dominance
GPU shortages are expected to persist until at least 2026 because key components remain scarce. Fusion Worldwide, an electronics components distributor, says high-bandwidth memory (HBM), a critical part of modern AI GPUs, will stay in short supply through that period. HBM shortages directly cap shipments of high-end GPUs, since the memory is essential for training and running large AI models.
The issue originates at the top of the semiconductor supply chain. SK Hynix and Micron, two of the world’s biggest HBM makers, have both said their entire 2026 production is already sold out. Samsung has also warned of double-digit price increases driven by tight supply.
Crypto miners were once blamed for GPU shortages; now it is the AI boom straining the supply chain. Hyperscalers and AI labs are locking up memory, packaging, and wafer capacity years in advance to secure future supply, leaving little room for anyone else in the market.
That persistent scarcity is one reason decentralised compute markets remain viable. Render, Akash, and Golem operate outside the hyperscale supply chain, aggregating underused GPUs and offering them on short-term, flexible terms.
While they don’t fix supply bottlenecks, they give developers and workloads that struggle to find capacity in tightly controlled AI data centres an alternative route to compute.
Bitcoin miners reposition infrastructure for AI workloads
The AI boom is also reshaping the crypto mining business. At the same time, Bitcoin BTC$90,016 economics shift every four years as halvings cut block rewards.
Several miners are reassessing what their infrastructure is best suited for. Modern AI data centres need abundant electricity, cooling, and space, requirements that closely mirror large mining operations. With hyperscalers locking up much of the available GPU supply, those assets become more attractive for AI and high-performance computing workloads.
The shift is already visible. Bitfarms announced in November that it would convert part of its Washington State mining operation into an AI and high-performance computing centre to support Nvidia’s Vera Rubin systems. Since the last halving, numerous competitors have pivoted toward AI.
Nvidia’s Vera Rubin doesn’t remove scarcity; it makes hardware perform better inside hyperscale data centres, where access to GPUs, memory, and networking is already tightly constrained. The supply problems, especially around HBM, are likely to last all year.
That scarcity opens up opportunities for decentralised computing networks to fill market gaps by handling workloads that can’t secure long-term contracts or dedicated capacity in AI factories. These networks are not a replacement for hyperscale infrastructure; rather, they offer short-term jobs and flexible access to computing power during the AI boom.


