AI content detection: Can blockchain prove what's real?

Jennifer Walsh · 15.01.2026, 20:11:08

Interview with Jennifer Walsh | Digital Authenticity Researcher | Co-founder of Verity Protocol

We're drowning in synthetic content. AI-generated images flood social media. Chatbots write articles. Deepfakes impersonate politicians. The line between real and artificial has blurred beyond recognition. Jennifer Walsh believes blockchain-based provenance is our best hope for restoring trust.

2049.news: How bad is the synthetic content problem?

Jennifer Walsh: Worse than most people realize.

Current estimates suggest 10-15% of images shared on major social platforms are AI-generated. For some categories — like product photos, profile pictures, news imagery — the percentage is much higher. We're approaching a point where synthetic content is the norm, not the exception.

Text is even harder to track. AI-written articles, reviews, comments — they're everywhere. Amazon reviews, news aggregators, social media posts. The volume makes human verification impossible.

The deeper problem is trust erosion. When anything could be fake, people start disbelieving everything. Real evidence gets dismissed as AI. Genuine photos get questioned. The epistemic foundation of shared reality is cracking.

2049.news: Why don't current detection methods work?

Jennifer Walsh: It's an arms race that detection is losing.

AI detection tools look for statistical patterns that distinguish synthetic from real content. Artifacts in images, unusual word patterns in text, inconsistencies in audio. These work against older generators.

But generators improve. Each new model produces more realistic output. The artifacts get subtler. The patterns become indistinguishable from human-created content. Detection accuracy that was 95% last year might be 70% this year, 50% next year.

Worse, detection itself provides training signal. Researchers publish detection methods. Generator developers use those methods to identify and eliminate detectable artifacts. The detection literature is essentially a roadmap for better fakes.

At the limit, there's no statistical difference between perfect synthetic content and real content. Detection based on content analysis alone is fundamentally doomed.

2049.news: So what's the alternative?

Jennifer Walsh: Provenance over detection. Instead of asking "is this content real?" we ask "where did this content come from?"

The core idea: verify the origin and chain of custody, not the content itself. A photo taken by a verified camera, uploaded by a verified user, stored with verified integrity — that's trustworthy regardless of whether it "looks" real or fake.

This is how physical evidence works. We don't chemically analyze whether a document is genuine — we verify chain of custody, signatures, official stamps. Digital content needs the same framework.

Blockchain provides the infrastructure. Immutable records of creation, timestamping, and transfer. No central authority to corrupt. Cryptographic verification that anyone can perform.
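The core check she describes is simple enough to sketch in a few lines. This is a minimal illustration, assuming a SHA-256 fingerprint and a record shape invented for the example; it is not Verity Protocol's actual schema:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 fingerprint of the exact content bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical on-chain record; the fields are illustrative only.
onchain_record = {
    "hash": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    "timestamp": "2025-06-01T12:00:00Z",
}

received = b"test"  # the content as it arrived
if content_hash(received) == onchain_record["hash"]:
    print("Matches the registered fingerprint; unaltered since registration.")
else:
    print("No match: altered, or never registered under this record.")
```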

2049.news: How does your protocol actually work?

Jennifer Walsh: Multiple layers working together.

At capture, verified devices sign content with hardware-based keys. Camera manufacturers like Sony and Canon are implementing this. The image includes cryptographic proof that it came from a specific physical device at a specific time and location.
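As a rough sketch of what hardware signing at capture could look like, here is an Ed25519 signature over an image hash plus capture metadata, using Python's `cryptography` package. The claim fields and flow are assumptions for illustration, not the C2PA wire format, and in a real camera the private key would live in a tamper-resistant secure element:

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for a key burned into the camera's secure element at manufacture.
device_key = Ed25519PrivateKey.generate()

def sign_capture(image_bytes: bytes, timestamp: str, location: str) -> dict:
    """Bind an image to a specific device, time, and place with one signature."""
    claim = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": timestamp,
        "location": location,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": device_key.sign(payload).hex()}

attestation = sign_capture(b"...raw sensor bytes...",
                           "2026-01-15T09:30:00Z", "51.5074,-0.1278")
```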

At creation, AI generators can optionally sign their output. This sounds counterintuitive — why would generators prove their content is synthetic? But legitimate uses benefit from transparency. Stock image platforms, creative tools, research applications — they want authenticity metadata.

At distribution, content gets registered on-chain. Hash of the content, source signature, timestamp, optional metadata about edits or transformations. Anyone can verify this record against content they encounter.
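One hypothetical shape for such a record, with every field name chosen to mirror the description above rather than the protocol's real schema:

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """One on-chain registration entry (illustrative schema, not Verity's)."""
    content_hash: str                 # SHA-256 of the exact content bytes
    source_signature: str             # device or publisher signature, hex-encoded
    source_pubkey: str                # public key for verification, hex-encoded
    timestamp: str                    # when the record was anchored on-chain
    parent_hash: str | None = None    # set when content derives from earlier content
    edits: list[str] = field(default_factory=list)  # e.g. ["crop", "color-correct"]
```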

At consumption, verification tools check provenance. Browser extensions, app integrations, platform features. "This image has verified provenance from Reuters photographer X, captured at location Y on date Z." Or "This image has no provenance record — origin unknown."
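Continuing the sketch, a consumption-side check might look like the following. The local `registry` dict stands in for whatever chain query real tooling would perform, and for simplicity the source here signs the content hash directly rather than a full capture claim:

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Local index of on-chain records, keyed by content hash
# (uses the ProvenanceRecord dataclass sketched above).
registry: dict[str, "ProvenanceRecord"] = {}

def check_provenance(content: bytes) -> str:
    digest = hashlib.sha256(content).hexdigest()
    record = registry.get(digest)
    if record is None:
        return "No provenance record; origin unknown."
    pubkey = Ed25519PublicKey.from_public_bytes(bytes.fromhex(record.source_pubkey))
    try:
        pubkey.verify(bytes.fromhex(record.source_signature), digest.encode())
    except InvalidSignature:
        return "Record found, but its signature does not verify."
    return f"Verified provenance, anchored at {record.timestamp}."
```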

2049.news: What about content that's been edited or transformed?

Jennifer Walsh: This is where it gets sophisticated.

Not all edits invalidate authenticity. Cropping, color correction, compression — these are normal parts of content distribution. A newspaper cropping a photo doesn't make it fake.

Our protocol tracks transformations. Each edit gets recorded with its own signature. You can see the full history: original capture, cropped by editor A, color-corrected by system B, published by outlet C. The chain is preserved.

Significant modifications trigger different classifications. AI enhancement, face swapping, content addition or removal — these create new provenance records that reference but are distinct from the original. The content is marked as "derived from" rather than "identical to" the source.
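The classification step could be as simple as the sketch below. The edit vocabulary and the two-way split are invented for illustration; a real policy would be richer:

```python
# Transformations that typically preserve authenticity vs. ones that change content.
BENIGN_EDITS = {"crop", "color-correct", "compress", "resize"}

def classify_derivation(edits: list[str]) -> str:
    """Label a derived record based on the edits applied since the original."""
    if not edits:
        return "identical-to-source"
    if all(e in BENIGN_EDITS for e in edits):
        return "edited, authenticity preserved"
    return "derived-from-source"  # AI enhancement, face swap, content added/removed

# Example chain: original capture -> newspaper crop -> AI-upscaled version.
print(classify_derivation([]))                         # identical-to-source
print(classify_derivation(["crop", "color-correct"]))  # edited, authenticity preserved
print(classify_derivation(["crop", "ai-enhance"]))     # derived-from-source
```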

Users can then decide what level of modification they trust for their purposes. Journalism might require minimal editing. Entertainment might accept heavy modification. The provenance provides information; humans decide what to do with it.

2049.news: This requires massive adoption to work. How do you get there?

Jennifer Walsh: Multiple adoption paths, simultaneously.

Hardware manufacturers are motivated. Canon, Sony, Nikon face a world where their cameras' output is indistinguishable from AI generation. Authenticated capture hardware becomes a competitive advantage. They're implementing C2PA standards — the technical foundation we build on.

News organizations need credibility. When every photo could be fake, verified provenance becomes essential for journalism. Major outlets including the BBC, The New York Times, and Reuters are piloting provenance systems. They want proof their content is genuine.

Platforms face regulatory pressure. The EU's Digital Services Act, various US state laws — they're requiring platforms to address synthetic content. Provenance checking offers a scalable compliance mechanism.

Enterprise has immediate needs. Contracts, legal evidence, financial documents — these require authenticity guarantees. Businesses will pay for verified provenance on high-stakes content.

Consumer adoption follows infrastructure. Once cameras sign, platforms check, and news organizations verify — consumers get provenance indicators without doing anything. The system works in the background.

2049.news: What about privacy concerns? Doesn't tracking all content create surveillance infrastructure?

Jennifer Walsh: Legitimate concern, and we've designed around it.

Provenance is opt-in, not mandatory. You can take photos without signing them. You can share content without registration. The system provides authenticity for those who want it, not surveillance for everyone.

Verification is cryptographic, not a database lookup. You don't query a central server that logs your activity. You verify locally using public blockchain data. No one knows what you're verifying.

Anonymity and provenance can coexist. A whistleblower can prove their photo was taken at a specific location and time without revealing their identity. The cryptographic attestation proves facts without exposing the person.
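Mechanically, this falls out of the capture attestation sketched earlier: the signature binds a hash, a time, and a place to a key, and verifying it never requires knowing who holds the key. A compressed illustration, with the same caveat that the fields are invented:

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A pseudonymous key: registered only as "a verified capture device",
# never linked to a person.
anon_key = Ed25519PrivateKey.generate()

claim = {
    "content_hash": hashlib.sha256(b"...photo bytes...").hexdigest(),
    "timestamp": "2026-01-15T09:30:00Z",
    "location": "51.5074,-0.1278",
}
payload = json.dumps(claim, sort_keys=True).encode()
signature = anon_key.sign(payload)

# Anyone can confirm the when/where facts against the signature and public key
# without learning the photographer's identity; verify() raises on tampering.
anon_key.public_key().verify(signature, payload)
print("Claim verified; signer identity never disclosed.")
```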

We're not building identity tracking. We're building content authenticity. These are different things, and keeping them separate is a core design principle.

2049.news: What's the realistic timeline for this mattering?

Jennifer Walsh: Already starting, years from mainstream.

2025: First major camera lines ship with built-in content authentication. Early adopter news organizations deploy verification workflows. Blockchain provenance registries reach production scale.

2026-2027: Major social platforms integrate provenance checking. Regulatory requirements take effect. Consumer awareness of content authenticity grows. "Verified" badges become meaningful.

2028+: Provenance becomes expected for serious content. Unverified content carries a presumption of uncertainty. The default flips from "assume real" to "verify before trusting."

This won't eliminate fake content. It will create a credibility gradient. Verified content becomes more trusted. Unverified content becomes more suspect. People can make informed judgments.

That's the realistic goal — not eliminating fakes, but restoring the ability to identify authenticity. Rebuilding the foundation for trust in a post-AI world.

Jennifer Walsh researches digital authenticity and co-founded Verity Protocol, a blockchain-based content provenance system. She previously led trust and safety at Twitter and holds a PhD in Information Science from Berkeley.


