On-chain AI: When smart contracts meet machine learning

Viktor Semenov · 15.07.2025, 19:29:22

Author: Viktor Semenov | Blockchain AI Researcher | Core Contributor at Modulus Labs

Smart contracts are deterministic. Given the same input, they produce the same output, every time. That's the whole point — trustless execution that anyone can verify.

Machine learning is probabilistic. Models make predictions. Outputs vary. The same input might produce different results depending on model version, random seeds, or floating-point precision.

These two paradigms seem incompatible. Yet combining them might be the most important infrastructure challenge of the next decade.

Why on-chain AI matters

DeFi needs intelligence. Lending protocols need to assess credit risk. AMMs need dynamic fee adjustment. Insurance needs claim verification. Governance needs proposal analysis.

Currently, this intelligence comes from off-chain oracles. A centralized service runs the model, signs the output, posts it on-chain. The blockchain trusts the oracle.

But oracle trust undermines blockchain's value proposition. You've just reintroduced centralized points of failure. The oracle operator can manipulate outputs. The model can be biased. Nobody can verify the computation actually happened correctly.

True on-chain AI means verifiable inference. Anyone can confirm the model ran correctly on the stated inputs. No trusted intermediaries. The blockchain's security guarantees extend to AI computation.

The technical challenge

Running AI models on blockchain is computationally absurd.

A small neural network might require millions of floating-point operations, and the EVM has no native floating-point arithmetic, so each of those operations would have to be emulated in fixed point. Ethereum processes roughly 15 transactions per second; the entire network's compute capacity couldn't run a single model inference in a reasonable time.

Even if you could run it, the cost would be prohibitive. Ethereum gas prices would make a single inference cost thousands of dollars. Nobody's paying that for a credit score.
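
A rough back-of-envelope makes the point. Every number here (operation count, per-operation gas, gas price, ETH price) is an assumption for illustration, not a measured cost:

```python
# Rough back-of-envelope for naive on-chain inference. All numbers are assumptions
# chosen for illustration, not measured EVM costs for any real model.

mult_adds = 1_000_000        # operations in one small-model inference (assumed)
gas_per_op = 100             # gas to emulate one fixed-point multiply-add (assumed)
gas_price_gwei = 20          # assumed gas price
eth_price_usd = 3_000        # assumed ETH price

total_gas = mult_adds * gas_per_op                 # 100,000,000 gas
cost_eth = total_gas * gas_price_gwei / 1e9        # gwei -> ETH
cost_usd = cost_eth * eth_price_usd

print(f"{total_gas:,} gas -> {cost_eth:.2f} ETH -> ${cost_usd:,.0f} per inference")
# 100,000,000 gas -> 2.00 ETH -> $6,000 per inference, and a single block only holds
# on the order of 30 million gas, so the computation wouldn't even fit in one block.
```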

The naive approach doesn't work. We need something smarter.

Zero-knowledge ML

Here's where it gets interesting.

Zero-knowledge proofs let you prove computation happened correctly without re-executing it. I can prove I ran Model X on Input Y and got Output Z, and you can verify my proof in milliseconds regardless of how long the original computation took.

Applied to ML: run the model off-chain, generate a ZK proof of correct execution, submit proof on-chain. The blockchain verifies the proof, not the computation. Verification is cheap even when computation is expensive.
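
The shape of that flow, as a toy sketch: the mock "proof" below is just a hash so the example runs end to end. A real system would use a SNARK or STARK prover, and none of these names reflect an actual proving library's API.

```python
# Sketch of the ZK-ML flow: heavy computation off-chain, cheap verification on-chain.
# The "proof" here is a mock stand-in so the flow runs end to end; these names are
# illustrative, not the API of any real proving stack.

import hashlib
import json
from dataclasses import dataclass

def commit(obj) -> str:
    """Hash commitment to a model, input, or output (stand-in for a real commitment scheme)."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

@dataclass
class InferenceProof:
    model_commitment: str   # binds the proof to specific weights
    input_commitment: str   # binds it to the stated inputs
    output: float           # claimed result
    proof: str              # mock proof blob (would be a succinct ZK proof)

def run_model(weights: list, x: list) -> float:
    """Toy linear model standing in for real inference."""
    return sum(w * xi for w, xi in zip(weights, x))

def prove_inference(weights, x) -> InferenceProof:
    """Off-chain prover: slow in reality (minutes to hours), produces a small proof."""
    y = run_model(weights, x)
    blob = commit({"m": commit(weights), "x": commit(x), "y": y})
    return InferenceProof(commit(weights), commit(x), y, blob)

def verify_on_chain(p: InferenceProof, approved_model: str) -> bool:
    """On-chain verifier: cheap and fast, never re-runs the model."""
    if p.model_commitment != approved_model:
        return False  # proof is for a model that was never approved
    expected = commit({"m": p.model_commitment, "x": p.input_commitment, "y": p.output})
    return p.proof == expected  # a real verifier checks a SNARK, not a recomputed hash

weights, x = [0.4, -1.2, 0.7], [1.0, 0.5, 2.0]
proof = prove_inference(weights, x)
print(verify_on_chain(proof, commit(weights)))  # True
```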

This is what my team at Modulus Labs builds. We've demonstrated ZK proofs for neural networks with millions of parameters. The proofs are large and generation is slow, but verification is fast and cheap.

EZKL, RISC Zero, and others are building similar infrastructure. The race is on.

What's possible today

Let me be honest about current limitations.

Small models work now. Decision trees, logistic regression, small neural networks — models with thousands to low millions of parameters can be proven efficiently. Inference proofs generate in minutes, verify in seconds.
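
One detail worth seeing in code: ZK circuits work over integers, so even these small models are typically quantized to fixed point before proving. A minimal sketch, with an arbitrary scale factor and a made-up logistic-regression-style decision:

```python
# Minimal sketch of making a tiny model ZK-friendly: circuits operate over field
# elements (integers), so weights and inputs are quantized to fixed point before
# proving. The scale factor and the model itself are arbitrary illustrative choices.

SCALE = 2**16  # fixed-point scale: real value r is represented as round(r * SCALE)

def quantize(values: list) -> list:
    return [round(v * SCALE) for v in values]

def fixed_point_score(q_weights: list, q_inputs: list, q_bias: int) -> int:
    """Integer-only dot product plus bias, the kind of arithmetic a circuit proves directly."""
    acc = q_bias * SCALE  # keep everything at SCALE**2 during accumulation
    for w, x in zip(q_weights, q_inputs):
        acc += w * x
    return acc

# Logistic-regression-style binary decision: sigmoid(score) >= 0.5 iff score >= 0.
weights, bias = [0.8, -0.3, 1.1], -0.5
features = [1.2, 0.4, 0.9]

q_score = fixed_point_score(quantize(weights), quantize(features), round(bias * SCALE))
approve = q_score >= 0
print(q_score / SCALE**2, approve)  # dequantized score and the provable yes/no decision
```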

Large models are coming. We've demonstrated proofs for GPT-2 scale models, but proof generation takes hours. Not practical yet, but proving feasibility matters.

Specific architectures help. Some model types are more ZK-friendly than others. Transformers are harder than CNNs. Attention mechanisms are expensive to prove. Architecture choices affect provability.

Approximate verification is a middle ground. Instead of proving exact computation, prove that the output is within acceptable bounds. This reduces proof complexity at the cost of precision guarantees.
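
In code, the relaxed statement is nothing more than a bounds check (the tolerance here is an arbitrary illustrative value):

```python
# Sketch of approximate verification: the statement being proven is only that the
# claimed output lies within a tolerance of a cheaper reference computation, not
# that it matches exact floating-point execution. Tolerance is illustrative.

def within_bounds(claimed: float, reference: float, tolerance: float = 1e-3) -> bool:
    """The relaxed statement a cheaper circuit would prove."""
    return abs(claimed - reference) <= tolerance

full_precision_output = 0.73214   # what the off-chain model actually produced
low_precision_output = 0.7319     # cheaper-to-prove recomputation inside the circuit

print(within_bounds(full_precision_output, low_precision_output))  # True: accept
print(within_bounds(0.7500, low_precision_output))                 # False: reject
```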

Applications starting to emerge

Despite limitations, real applications are launching.

On-chain credit scoring. Prove that a lending decision followed from applying a specific model to specific inputs. Borrowers can verify they weren't discriminated against. Regulators can audit algorithmic decisions.

Verifiable AI agents. As AI agents manage on-chain assets, proving they follow stated strategies becomes critical. ZK proofs can verify that Agent X actually ran Strategy Y, not something the operator substituted.
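
A sketch of the commitment pattern that makes this possible: the strategy is committed by hash when the agent is deployed, and every claimed action is checked against that commitment. The re-execution below stands in for the ZK proof, and all names are illustrative:

```python
# Sketch of binding an agent's actions to a committed strategy. The strategy code is
# hashed when the agent is deployed, and every claimed action must check out against
# that commitment, so the operator can't quietly swap in different logic.

import hashlib
import inspect

def strategy_v1(prices: list) -> str:
    """The publicly stated strategy: buy below the trailing average, otherwise hold."""
    avg = sum(prices) / len(prices)
    return "buy" if prices[-1] < avg else "hold"

def commit_strategy(fn) -> str:
    """Commitment posted on-chain at deployment (here, a hash of the strategy source)."""
    return hashlib.sha256(inspect.getsource(fn).encode()).hexdigest()

ON_CHAIN_COMMITMENT = commit_strategy(strategy_v1)

def verify_action(claimed_commitment: str, action: str, prices: list) -> bool:
    """Checks the action came from the committed strategy (a ZK proof would replace the re-run)."""
    if claimed_commitment != ON_CHAIN_COMMITMENT:
        return False  # operator substituted a different strategy
    return strategy_v1(prices) == action

prices = [101.0, 99.5, 98.2, 97.0]
print(verify_action(ON_CHAIN_COMMITMENT, "buy", prices))  # True
```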

Fraud detection in DeFi. Models that identify suspicious transactions can run verifiably. Protocols can automatically respond to proven threats without trusting a centralized security team.

Dynamic protocol parameters. AMM fees, liquidation thresholds, interest rates — parameters that currently require governance votes could adjust automatically based on verifiable model outputs.

Gaming and prediction markets. Prove that game AI behaved according to rules. Verify that prediction models were applied consistently. Eliminate disputes about fairness.

The oracle hybrid approach

Pure on-chain AI isn't the only path. Hybrid architectures offer pragmatic alternatives.

Optimistic ML follows the optimistic rollup pattern. Accept oracle outputs by default. Anyone can challenge by submitting a fraud proof. If the proof shows incorrect computation, the oracle is slashed.

This reduces costs dramatically. You only generate expensive proofs when disputes occur. In practice, the threat of provable fraud prevents most cheating.
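
A minimal sketch of that pattern, with an assumed challenge window, an assumed bond, and a re-execution standing in for the fraud proof:

```python
# Minimal sketch of optimistic ML: oracle outputs are accepted by default, and only
# contested results trigger an expensive dispute. The challenge window, bond size, and
# the re-execution standing in for a fraud proof are illustrative assumptions.

import time
from dataclasses import dataclass, field

CHALLENGE_WINDOW_S = 3600     # assumed one-hour dispute window
ORACLE_BOND = 10.0            # assumed stake the oracle forfeits if fraud is proven

@dataclass
class Claim:
    inputs: list
    claimed_output: float
    posted_at: float = field(default_factory=time.time)
    slashed: bool = False

def reference_model(inputs: list) -> float:
    """The committed model; in a real dispute, a fraud proof replaces this re-run."""
    return sum(inputs) / len(inputs)

def challenge(claim: Claim) -> bool:
    """Anyone can challenge inside the window; a successful challenge slashes the oracle."""
    if time.time() - claim.posted_at > CHALLENGE_WINDOW_S:
        return False  # window closed, claim is final
    if abs(reference_model(claim.inputs) - claim.claimed_output) > 1e-9:
        claim.slashed = True
        return True   # fraud proven: oracle forfeits ORACLE_BOND to the challenger
    return False

honest = Claim(inputs=[1.0, 2.0, 3.0], claimed_output=2.0)
cheating = Claim(inputs=[1.0, 2.0, 3.0], claimed_output=5.0)
print(challenge(honest), challenge(cheating))  # False True
```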

Committee-based verification uses multiple independent parties running the same model. If they agree, output is accepted. Disagreement triggers investigation. Distributed trust without full ZK overhead.
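
The agreement check itself is simple; the committee outputs and tolerance below are arbitrary illustrative choices:

```python
# Sketch of committee-based verification: several independent parties run the same
# model, and the output is accepted only if their results agree within a tolerance.

import statistics

def committee_decision(outputs: list, tolerance: float = 1e-6):
    """Returns ('accepted', value) on agreement, ('disputed', None) otherwise."""
    median = statistics.median(outputs)
    if all(abs(o - median) <= tolerance for o in outputs):
        return "accepted", median
    return "disputed", None   # disagreement: escalate, or fall back to a full proof

print(committee_decision([0.8412, 0.8412, 0.8412]))   # ('accepted', 0.8412)
print(committee_decision([0.8412, 0.8412, 0.9901]))   # ('disputed', None)
```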

TEE-attested computation runs models inside secure enclaves like Intel SGX or AWS Nitro. Hardware attests to correct execution. Not as trustless as ZK proofs, but much more practical today.
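
A sketch of the resulting trust model, using a plain Ed25519 signature (via the Python cryptography package) as a stand-in for the real SGX or Nitro attestation formats, which are considerably more involved:

```python
# Sketch of the TEE trust model: the enclave signs (model, input, output) with a key
# that hardware attestation ties to specific enclave code, and verifiers only check
# the signature. A plain Ed25519 signature stands in for real attestation documents.

import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Inside the enclave: key generated at launch; attestation binds its public half to the code.
enclave_key = Ed25519PrivateKey.generate()
enclave_pubkey = enclave_key.public_key()

def enclave_run(model_id: str, inputs: str):
    """Runs inference inside the enclave and signs a digest of what it computed."""
    output = f"score=0.82 for {inputs}"                      # placeholder inference
    digest = hashlib.sha256(f"{model_id}|{inputs}|{output}".encode()).digest()
    return output, enclave_key.sign(digest)

def verify_attested_output(model_id: str, inputs: str, output: str, signature: bytes) -> bool:
    """Outside the enclave: trust reduces to 'this key belongs to audited enclave code'."""
    digest = hashlib.sha256(f"{model_id}|{inputs}|{output}".encode()).digest()
    try:
        enclave_pubkey.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

out, sig = enclave_run("credit-v1", "income=55000,debt=12000")
print(verify_attested_output("credit-v1", "income=55000,debt=12000", out, sig))  # True
```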

What's still missing

Several unsolved problems remain.

Model provenance is unclear. Even if you prove inference was correct, how do you prove the model itself is legitimate? Who trained it, on what data, with what objectives? ZK proofs verify computation, not intent.

Model updates create governance challenges. AI models need retraining. Who decides when to update on-chain models? How do you migrate from v1 to v2 without disruption? Traditional governance is too slow for ML iteration.

Privacy and verifiability trade off against each other. Proving correct inference might require revealing model weights or input data. Some applications need both privacy and verifiability — an active research area.

Adversarial robustness matters more. When model outputs trigger financial transactions, adversarial attacks become profitable. On-chain AI needs stronger robustness guarantees than typical ML.

Timeline and predictions

2025: Production deployments of simple on-chain ML. Credit scoring, risk assessment, basic classification. Proof generation is still measured in minutes, limiting these to batch applications.

2026-2027: Hardware acceleration for ZK proofs makes larger models practical. GPUs and ASICs designed for proof generation. Real-time inference verification becomes possible for mid-sized models.

2028+: Large language models become provable. On-chain AI agents with verifiable reasoning. The distinction between smart contracts and AI systems blurs.

This timeline assumes continued progress in both ZK technology and AI efficiency. Either could accelerate or delay the convergence.

But the direction is clear. Blockchains need intelligence. Intelligence needs verifiability. The merger is inevitable — only the timing is uncertain.

Viktor Semenov researches verifiable machine learning and contributes to Modulus Labs' ZK-ML infrastructure. He previously worked on cryptographic systems at Trail of Bits.

#Crypto

