Decentralized AI: Who owns the models?

Emma Thompson · 01.12.2025, 19:08:51

Interview with Emma Thompson | AI Ethics Researcher | Fellow at Oxford Internet Institute

OpenAI, Google, Anthropic, Meta — a handful of companies control the most powerful AI systems ever built. Emma Thompson has spent a decade studying technology governance. She sees decentralized AI as both an alternative and a necessity.

2049.news: Why does AI ownership matter?

Emma Thompson: Because AI is becoming infrastructure. It's not just a product — it's the foundation other products build on. Search, communication, creativity, decision-making — all increasingly mediated by AI systems.

When infrastructure is controlled by a few private entities, they gain enormous power. They decide what's allowed, what's censored, what's possible. They shape how billions of people think, create, and interact.

We've seen this movie before with social media. A few platforms became the public square, then used that position to optimize for engagement over truth. AI concentration could be worse because the systems are more capable and less transparent.

2049.news: What does "decentralized AI" actually mean?

Emma Thompson: It's a spectrum, not a binary.

At one end: fully open models. Weights published, anyone can run them, no central control. Meta's Llama, Mistral's models, Stability AI's image generators. You download them, run them locally, modify them freely.
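
To make "run them locally" concrete, here is a minimal sketch using the Hugging Face transformers library. The model ID is just an illustrative open-weight example, and you need the weights downloaded plus enough memory to hold them; treat this as a sketch, not a recipe.

```python
# Minimal sketch: running an open-weight model locally with Hugging Face
# transformers. Assumes `pip install transformers torch` and enough RAM or
# VRAM for the weights; the model ID is an illustrative example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # any open-weight model ID works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Once the weights are on disk, no central provider sits between you and the model.
inputs = tokenizer("Who should own AI models?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```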

Middle ground: distributed training and inference. Projects like Gensyn, Together AI, and Bittensor let people contribute compute to train or run models collectively. No single entity controls the infrastructure.
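
The incentive layers differ between these projects, but the core mechanism of distributed training is simple: each contributor computes gradients on its own data shard, and the shared model moves by the average. A toy illustration of that one idea in plain Python (no real networking; the shards and values are invented):

```python
# Toy illustration of the idea behind distributed training: each contributor
# computes a gradient on its local data shard, and the shared model moves by
# the average. Real systems add networking, verification, and incentives on
# top; this only shows the arithmetic, for a 1-parameter model y = w * x.
from statistics import fmean

def local_gradient(w: float, shard: list[tuple[float, float]]) -> float:
    """Gradient of mean squared error on this contributor's private shard."""
    return fmean(2 * (w * x - y) * x for x, y in shard)

def distributed_step(w: float, shards: list[list[tuple[float, float]]],
                     lr: float = 0.01) -> float:
    # Each node computes a gradient locally; only gradients are shared.
    avg_grad = fmean(local_gradient(w, shard) for shard in shards)
    return w - lr * avg_grad

# Three hypothetical contributors, each holding a private shard of (x, y) pairs.
shards = [[(1.0, 2.0), (2.0, 4.1)], [(3.0, 5.9)], [(4.0, 8.2), (5.0, 9.8)]]
w = 0.0
for _ in range(200):
    w = distributed_step(w, shards)
print(w)  # converges toward w ≈ 2 without pooling anyone's raw data
```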

Further along: community-governed AI. Models developed through democratic processes, with stakeholders voting on capabilities, restrictions, and development priorities. This barely exists yet, but it's where the interesting experiments are happening.

The key question at each level: who makes decisions, and how?

2049.news: What's the case against centralized AI?

Emma Thompson: Multiple overlapping concerns.

Censorship and bias. Every major AI provider implements content policies. Reasonable in principle — nobody wants AI helping with terrorism. But policies reflect the values of whoever writes them. American tech companies impose American norms globally. Topics that are controversial in San Francisco get restricted everywhere.

Single points of failure. If OpenAI goes bankrupt, gets acquired, or changes policies, millions of applications break overnight. Building on centralized infrastructure means accepting that dependency.

Surveillance potential. Centralized providers see every query. They know what you're thinking, creating, investigating. Even with privacy policies, that data exists and can be subpoenaed, hacked, or misused.

Innovation constraints. When one company controls the platform, they decide what can be built. Anything that threatens their business model gets banned. Decentralized systems enable permissionless innovation.

2049.news: But doesn't decentralization have risks too?

Emma Thompson: Absolutely. I'm not naive about this.

Open models enable misuse. If anyone can run an uncensored model locally, anyone includes bad actors. We've already seen open models used to generate CSAM, synthesize harmful information, and create deepfakes at scale.

Governance is hard. "The community decides" sounds democratic until you realize that in most communities a tiny minority of engaged participants makes the decisions while everyone else ignores governance entirely. Sound familiar? It's the DAO problem applied to AI.

Quality matters. Centralized labs have billions in funding, top researchers, massive compute. Open-source efforts are catching up but still lag on frontier capabilities. If decentralized AI is significantly worse, people won't use it regardless of ideology.

The honest position is: both centralization and decentralization have costs. We need to choose consciously rather than defaulting to whoever builds fastest.

2049.news: Where does crypto fit into this?

Emma Thompson: Crypto provides coordination mechanisms that decentralized AI needs.

Compute incentives. Training large models requires enormous compute. Crypto tokens can incentivize people to contribute GPUs to distributed training. Bittensor, Render Network, Akash — they're building marketplaces for AI compute.

Data compensation. Models are trained on human-created data. Currently, creators get nothing while AI companies capture all value. Crypto enables micropayments, royalty tracking, and data DAOs where contributors are compensated.
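
As a sketch of what "contributors are compensated" could mean mechanically, consider pro-rata payouts from inference revenue, split by recorded contribution weights. Everything here (the names, the weights, the revenue figure) is hypothetical:

```python
# Hypothetical data-DAO payout: split an epoch's inference revenue among data
# contributors in proportion to recorded contribution weights. All names,
# weights, and figures are invented for illustration.

contributions = {  # contributor -> share of the training corpus
    "alice": 0.50,
    "bob": 0.30,
    "carol": 0.20,
}

def split_revenue(revenue: float, shares: dict[str, float]) -> dict[str, float]:
    total = sum(shares.values())
    return {who: revenue * s / total for who, s in shares.items()}

print(split_revenue(1000.0, contributions))
# {'alice': 500.0, 'bob': 300.0, 'carol': 200.0}
```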

Governance infrastructure. On-chain voting, token-weighted decisions, quadratic funding for AI research — crypto governance experiments directly apply to governing AI development.
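
Quadratic funding, at least, has a precise formula: a project's ideal funding is the square of the sum of the square roots of its individual contributions, which rewards breadth of support over the size of any single donor. A small sketch with invented numbers:

```python
# Sketch of quadratic funding: a project's ideal match is
# (sum of sqrt(contributions))^2 minus what it raised directly, and the
# matching pool is then allocated proportionally. All figures are invented.
from math import sqrt

def qf_matches(projects: dict[str, list[float]], pool: float) -> dict[str, float]:
    ideal = {
        name: sum(sqrt(c) for c in contribs) ** 2 - sum(contribs)
        for name, contribs in projects.items()
    }
    total = sum(ideal.values())
    return {name: pool * amount / total for name, amount in ideal.items()}

projects = {
    "open_eval_suite": [1.0] * 100,  # 100 donors giving $1 each
    "gpu_cluster": [100.0],          # 1 donor giving $100
}
print(qf_matches(projects, pool=10_000.0))
# Broad support wins: 100 small donors attract all the matching; one whale gets none.
```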

Model ownership. NFTs representing model weights, fractional ownership of training runs, tradeable inference rights. Weird ideas, but crypto makes them possible.

I'm skeptical of crypto hype generally, but the intersection with AI governance is genuinely interesting.

2049.news: What decentralized AI projects are you watching?

Emma Thompson: A few stand out.

Bittensor is ambitious — a decentralized network where miners compete to provide AI capabilities. It's messy and speculative, but the architecture is interesting. They're trying to create market incentives for open AI development.

Hugging Face isn't blockchain-based, but it's the most successful open AI infrastructure. Model hub, datasets, Spaces for deployment. Community-driven but still centralized — an interesting middle ground.

EleutherAI deserves credit. Non-profit, open research collective that trained GPT-Neo and GPT-J before open-source models were cool. Proved you don't need a billion-dollar lab to do meaningful AI research.

On the crypto side, Gensyn's distributed training protocol is technically sophisticated. If they can make it work efficiently, it changes the economics of who can train large models.

2049.news: Where do you think we end up in five years?

Emma Thompson: Probably a mixed ecosystem.

Centralized labs will still lead on frontier capabilities. They have the resources, talent, and momentum. GPT-6 or whatever won't come from a decentralized collective.

But open models will be good enough for most applications. The gap between frontier and open-source is narrowing. For 90% of use cases, a free model running locally beats paying for API access to a marginally better one.

Regulation will favor centralization in some jurisdictions and decentralization in others. The EU will probably require licensed AI providers. Some countries will embrace open AI as a competitive advantage against American tech giants.

The optimistic scenario: we develop both centralized and decentralized AI with healthy competition between them. Neither monopolizes. Users have choices. That requires active effort to maintain.

The pessimistic scenario: centralized labs pull ahead decisively, open-source becomes irrelevant, and we end up with two or three companies controlling humanity's most powerful technology. That's the default path without intervention.

I'm working to make the optimistic scenario more likely. It won't happen automatically.

Emma Thompson researches AI governance and technology ethics at the Oxford Internet Institute. She advises governments and civil society organizations on AI policy and previously led research at the Partnership on AI.

#AI

