Compute tokens: Economics of distributed AI

Andrey Kuznetsov · 15.06.2025, 19:16:50

Author: Andrey Kuznetsov | Crypto-Economics Researcher | Advisor to Render Network and Akash

NVIDIA's market cap crossed $3 trillion in 2024. One company, making one product, became one of the most valuable entities on Earth. The reason is simple: AI needs compute, compute needs GPUs, and NVIDIA makes the GPUs.

This concentration is a problem. Crypto might be the solution. Welcome to the economics of compute tokens.

The compute bottleneck

Training GPT-4 required an estimated $100 million in compute costs. The next generation will cost more. AI capabilities scale with compute, and compute is scarce.

The scarcity isn't just hardware. It's access. NVIDIA sells to hyperscalers first — Amazon, Google, Microsoft. Startups wait in line. Researchers make do with leftovers. The compute hierarchy mirrors the power hierarchy.

Meanwhile, enormous compute sits idle. Gaming PCs with powerful GPUs. Data centers with spare capacity. Crypto miners with hardware seeking new purpose. The resource exists — it's just not accessible.

Distributed compute networks aim to unlock this. Tokenize the resource. Create markets. Let supply meet demand without centralized gatekeepers.

How compute tokens work

The basic model is straightforward.

Providers contribute compute — GPUs, CPUs, storage, bandwidth. They register on a network, prove their capacity, and offer resources for rent.

Consumers need compute — AI training, inference, rendering, scientific simulation. They pay tokens to access provider resources.

The network coordinates matching. Who has what capacity? Who needs what resources? How do you verify work was actually done? How do you handle disputes?

Tokens serve multiple functions. Payment for services. Staking for provider reliability. Governance for protocol decisions. The economics get complex, but the core is simple: tokenized compute marketplace.
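To make the model concrete, here's a minimal Python sketch of the matching layer. The data structures, names, and numbers are hypothetical illustrations, not any particular network's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    provider: str
    gpu_model: str
    vram_gb: int
    price_per_hour: float   # denominated in the network's token
    stake: float            # tokens locked as a reliability bond

@dataclass
class Job:
    consumer: str
    min_vram_gb: int
    max_price_per_hour: float
    hours: int

def match(job: Job, offers: list) -> Optional[Offer]:
    """Return the cheapest staked offer that meets the job's requirements."""
    eligible = [
        o for o in offers
        if o.vram_gb >= job.min_vram_gb
        and o.price_per_hour <= job.max_price_per_hour
        and o.stake > 0            # unstaked providers can't be slashed, so skip them
    ]
    return min(eligible, key=lambda o: o.price_per_hour, default=None)

offers = [
    Offer("gaming-rig-042", "RTX 4090", 24, price_per_hour=0.45, stake=500),
    Offer("mini-dc-07", "A100", 80, price_per_hour=1.60, stake=5_000),
]
job = Job("ai-startup", min_vram_gb=24, max_price_per_hour=1.00, hours=8)

winner = match(job, offers)
if winner is not None:
    print(f"Matched {winner.provider}: {winner.price_per_hour * job.hours:.2f} tokens for {job.hours}h")
```

Real networks add the hard parts on top of this loop: verifying that the work was actually done, slashing stakes for misbehavior, and resolving disputes.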

The major players

Several projects are building this infrastructure, each with different approaches.

Render Network focuses on GPU rendering — visual effects, 3D graphics, creative applications. Artists submit jobs, GPU owners render them, tokens flow. They've processed millions of frames for actual production work. Real usage, not just speculation.

Akash Network is a decentralized cloud. General-purpose compute, not just GPUs. Deploy containers, run applications, pay with AKT tokens. Costs roughly 85% less than AWS for equivalent resources. The discount drives adoption.
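A quick back-of-the-envelope shows how that discount compounds for an always-on workload. The hourly rate below is a placeholder, not a quoted price; only the 85% figure comes from the claim above.

```python
# Hypothetical monthly bill for one always-on instance.
# The $1.00/hour hyperscaler rate is a placeholder, not a quoted price;
# the 85% discount is the claim from the paragraph above.
hyperscaler_rate = 1.00                       # $/hour, assumed
decentralized_rate = hyperscaler_rate * 0.15  # "roughly 85% less"

hours_per_month = 24 * 30
print(f"Hyperscaler:   ${hyperscaler_rate * hours_per_month:,.0f}/month")
print(f"Decentralized: ${decentralized_rate * hours_per_month:,.0f}/month")
# -> $720/month vs $108/month for the same nominal capacity
```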

io.net aggregates GPU clusters specifically for AI workloads. They partner with existing data centers, crypto miners, and consumer GPU owners. Scale through aggregation rather than individual providers.

Gensyn tackles the hardest problem: distributed AI training. Training across heterogeneous, unreliable nodes is technically brutal. They're building verification systems to ensure training runs complete correctly even when individual providers fail or cheat.

Bittensor takes a different angle — a marketplace not for raw compute but for AI capabilities. Miners run models and compete to provide the best responses. Tokens reward quality, not just quantity.

The economics actually work (sometimes)

Let me be precise about where this makes economic sense.

Inference workloads benefit most. Running a trained model doesn't require tight coordination between GPUs, and many applications tolerate the extra latency of a remote provider. For those, distributed networks can genuinely compete on price.

Batch processing works well. Rendering, transcoding, scientific simulation — jobs that can be parallelized and don't need real-time response. These are ideal for distributed compute.

Training is harder. Large model training requires fast interconnects between GPUs — NVLink, InfiniBand. Consumer hardware can't match data center networking. Distributed training networks are making progress, but physics limits how far they can go.
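A rough bandwidth comparison shows why. The figures below are rounded, order-of-magnitude numbers, not benchmarks.

```python
# Order-of-magnitude estimate: time to move one full set of fp16 gradients
# for a 7B-parameter model over different links. Bandwidth figures are
# rounded, approximate values chosen for illustration only.
params = 7_000_000_000
bytes_per_param = 2                           # fp16 gradients
payload_gb = params * bytes_per_param / 1e9   # ~14 GB per exchange

links_gb_per_s = {
    "NVLink (intra-node)":       900.0,   # ~900 GB/s on recent data-center GPUs
    "InfiniBand (data center)":   50.0,   # ~400 Gb/s link, roughly 50 GB/s
    "Consumer broadband":          0.125, # ~1 Gb/s, roughly 0.125 GB/s
}

for name, bandwidth in links_gb_per_s.items():
    seconds = payload_gb / bandwidth
    print(f"{name:26s} ~{seconds:7.2f} s per gradient exchange")
# NVLink finishes in a fraction of a second; a home connection needs
# roughly two minutes, and training requires thousands of such exchanges.
```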

The honest comparison: distributed compute beats cloud pricing for suitable workloads. It doesn't beat having your own data center. The target market is people currently paying hyperscaler rates, not people with their own infrastructure.

The token value question

Here's where I put on my economist hat.

Compute tokens face a fundamental challenge: they're utility tokens trying to maintain value while being spent. If the token is just a payment method, why wouldn't it trade near its utility value? What creates sustainable token appreciation?
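One standard way to frame it is the equation of exchange, MV = PQ, applied to a utility token. The inputs below are invented purely to show the mechanics.

```python
# Equation of exchange: M * V = P * Q
#   M   = token market cap needed to support payments
#   V   = velocity (times an average token changes hands per year)
#   P*Q = annual USD value of compute bought through the network
# Both inputs below are invented for illustration.
annual_compute_spend = 100_000_000   # hypothetical: $100M of compute sold per year
velocity = 20                        # hypothetical: tokens turn over 20x per year

market_cap_from_usage = annual_compute_spend / velocity
print(f"Market cap supported by payment demand alone: ${market_cap_from_usage:,.0f}")
# -> $5,000,000: with high velocity, even serious usage supports a modest valuation
```

High velocity is the core problem: the faster tokens are earned, spent, and sold, the less market value a given volume of usage can support.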

Several mechanisms attempt to solve this.

Staking requirements lock supply. Providers must stake tokens to participate. More providers mean more locked tokens. Supply decreases, price should increase. Works until it doesn't — provider economics must still make sense.

Burn mechanisms create deflationary pressure. Some portion of fees gets burned. Circulating supply decreases over time. Theoretically supportive of price, but only if demand grows faster than new emissions.
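A toy simulation with invented parameters shows how these two levers interact with emissions.

```python
# Toy supply model: emissions add tokens, fee burns remove them, and
# provider staking locks a fraction out of circulation.
# Every parameter here is invented for illustration.
total_supply = 1_000_000_000
annual_emission_rate = 0.05      # 5% new tokens per year paid to providers
staked_fraction = 0.40           # 40% of supply locked by providers
annual_fees_in_tokens = 20_000_000
burn_share = 0.25                # 25% of fees burned

for year in range(1, 6):
    emitted = total_supply * annual_emission_rate
    burned = annual_fees_in_tokens * burn_share
    total_supply += emitted - burned
    circulating = total_supply * (1 - staked_fraction)
    print(f"Year {year}: total {total_supply/1e9:.3f}B, circulating {circulating/1e9:.3f}B")
# With these numbers, emissions (~50M/yr) swamp burns (5M/yr), so supply still
# inflates unless fee volume grows substantially faster than emissions.
```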

Governance rights provide non-monetary value. Tokens vote on protocol direction. Large holders influence the system's future. This matters more as the protocol becomes more important.

Speculation provides liquidity but also volatility. People buying tokens hoping for appreciation enable the market to function but create price disconnects from fundamental value.

My view: compute token economics are still experimental. Some will find sustainable equilibria. Many won't. Invest accordingly.

The regulatory landscape

Compute tokens occupy an uncertain regulatory position.

Are they securities? If purchased primarily for investment with expectation of profit from others' efforts — maybe. The SEC hasn't clarified, but the risk exists.

Are they commodities? Compute is arguably a commodity. Tokens representing compute access might be commodity derivatives. CFTC jurisdiction possible.

Are they just software licenses? Payment for services, like AWS credits? This framing is most favorable but requires the token to function purely as utility.

Most projects try to structure as pure utility tokens. No promises of appreciation. No investment marketing. Just payment for compute services. Whether regulators accept this framing remains to be seen.

What I'm watching in 2025

Real usage metrics matter most. Not tokens staked, not market cap — actual compute consumed, actual jobs processed, actual dollars of value delivered. The projects with growing usage will survive. The rest are speculation.

Enterprise adoption signals maturity. When companies put production workloads on distributed compute — not experiments, not proofs-of-concept, but real business applications — that's validation. Watch for Fortune 500 announcements.

AI compute demand will explode. Every company wants AI capabilities. Few can afford hyperscaler prices. The gap between demand and affordable supply is the opportunity distributed compute addresses.

Consolidation is likely. Too many projects, not enough differentiation. Some will merge, some will die, a few will dominate. The market can't sustain twenty compute networks.

The compute commodity thesis will be tested. If distributed networks can reliably deliver compute at scale, they become essential infrastructure. If they can't, they remain niche curiosities. 2025 will provide answers.

Andrey Kuznetsov researches token economics and decentralized infrastructure. He advises Render Network and Akash on economic mechanism design and previously worked at a16z crypto.

#AI

