Nolwen Brosson · Blog · 6 min read

The Authenticity War: How to Prove Content Is Real

Over the past few years, we’ve hit a real turning point: an image or a video is no longer proof. Between generative AI and deepfakes, “seeing” isn’t enough anymore.

Fake content is getting more realistic, detection tools are improving more slowly, and platforms like Instagram or TikTok still aren’t ready to display clear, user-friendly proof of origin.

So the real question won’t be “is this fake?” but rather: “what makes me believe this is authentic?” We’ll get to that right after 😊

What “proving it’s real” actually means

Before talking solutions, it’s important to clarify one thing: we’re not trying to prove a scene is “true” in a philosophical sense. We’re trying to establish:

  • Who produced the content (or which tool did),
  • When and how it was produced,
  • What chain of edits it went through,
  • And whether the current file truly matches that story.

That’s the idea of provenance: a media file’s verifiable history.

Why this got so hard with AI

Visual clues are no longer enough

The classic “tells” (wrong shadows, extra fingers, artifacts, etc.) are less and less reliable. Models keep improving, and light retouching can wipe out imperfections.

Fake content blends into real content

The future isn’t only 100% synthetic media. It’s real footage that gets modified: removing an element, changing a background, adding a logo, altering what someone says in a video, etc. We discuss how AI learns to generate this kind of content in this article.

Distribution breaks the evidence

The naive solution would be to rely on metadata. But even when a file contains metadata, a simple repost, compression, or a screenshot can strip it out. That’s one of the biggest weaknesses of “metadata-only” approaches.
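
To make that fragility concrete, here is a small sketch (assuming Pillow is installed; "original.jpg" is a placeholder): a plain re-save is enough to drop the EXIF block, which is roughly what happens when a platform or messaging app re-encodes an upload.

```python
# Re-saving a JPEG without explicitly carrying the EXIF data over drops it.
# "original.jpg" is a placeholder file; Pillow is assumed to be installed.
from PIL import Image

img = Image.open("original.jpg")
print("EXIF tags before re-save:", len(img.getexif()))

img.save("reposted.jpg", quality=85)   # re-encode without passing exif=...
print("EXIF tags after re-save:", len(Image.open("reposted.jpg").getexif()))  # typically 0
```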

The 3 main types of solutions

The most reliable practice today is to combine multiple signals. There’s no magic method: it’s a bundle of technical evidence plus process.

1) Cryptographic provenance: C2PA and Content Credentials

C2PA (Coalition for Content Provenance and Authenticity) is an open standard that attaches a kind of “identity dossier” to a piece of media, cryptographically bound to the file. This lets you verify the origin and the declared edits. The file’s fingerprint (hash) helps confirm the original content wasn’t altered. In addition, the “manifest” can describe where the content comes from and what actions were performed on it.

In the ecosystem, you’ll often hear about Content Credentials: the implementation / “human-readable format” of that information, sometimes presented as a “nutrition label” for content (who made it, captured vs generated, editing tools, history, etc.).
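
To illustrate the core idea (without reproducing the real C2PA format, which embeds a signed manifest inside the file), here is a minimal sketch in Python: a "nutrition label"-style manifest bound to the exact bytes of the file through a SHA-256 hash. The field names are illustrative, not the actual C2PA schema, and real manifests are signed with certificates rather than simply stored alongside the file.

```python
# Minimal illustration of hash binding: the manifest describes the asset and
# records its SHA-256; if a single byte of the file changes, the check fails.
# Field names are illustrative, not the real C2PA schema.
import hashlib

def build_manifest(path: str) -> dict:
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    return {
        "title": path,
        "claim_generator": "ExampleCam 1.0",   # tool that produced the asset (placeholder)
        "asset_sha256": digest,                # binds the manifest to these exact bytes
        "actions": [
            {"action": "captured", "when": "2025-01-01T10:00:00Z"},
            {"action": "color_adjusted", "tool": "ExampleEditor"},
        ],
    }

def still_matches(path: str, manifest: dict) -> bool:
    """True only if the file's current bytes match the hash recorded at creation."""
    return hashlib.sha256(open(path, "rb").read()).hexdigest() == manifest["asset_sha256"]

# manifest = build_manifest("photo.jpg")        # "photo.jpg" is a placeholder
# print(still_matches("photo.jpg", manifest))   # False as soon as the file is edited
```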

What it provides:

  • A verifiable proof of origin (signature),
  • A declared and traceable editing history,
  • A common language across tools (capture, editing, publishing).

What it doesn’t solve:

  • If content leaves the chain (screenshot, aggressive re-encoding), the proof can disappear.
  • It proves the history of a file, not automatically the “truth” of the scene. For example, if someone plays a deepfake on a screen and films it with a “trusted” camera, C2PA will say “captured by this camera,” but the filmed “reality” is already manipulated.

Worth noting: adoption is moving forward among toolmakers and manufacturers. Some cameras and organizations are pushing provenance “from capture,” whereas today C2PA is often added at export or during editing. Even OpenAI says it includes C2PA metadata for certain images generated in ChatGPT.

2) Detectable watermarks: the SynthID example

Another approach is embedding an invisible watermark into AI-generated content: a signal you can’t see, but tools can detect. Google DeepMind describes SynthID as a watermark designed to survive common transformations (compression, cropping, filters).
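
SynthID's exact method isn't public, so the sketch below is only a toy illustration of the general principle, assuming NumPy: a pseudo-random pattern far too faint to see is added to the pixels, and a detector that knows the secret key can still measure it by correlation. Production watermarks are engineered to be far more robust to compression, cropping, and filtering than this.

```python
# Toy spread-spectrum watermark: embed a faint secret pattern, detect it by
# correlation. Purely illustrative; this is NOT how SynthID works internally.
import numpy as np

rng = np.random.default_rng(seed=42)        # the seed acts as the secret key
PATTERN = rng.standard_normal((256, 256))   # zero-mean pseudo-random pattern

def embed(image: np.ndarray, strength: float = 3.0) -> np.ndarray:
    """Add a faint copy of the secret pattern (imperceptible at low strength)."""
    return np.clip(image + strength * PATTERN, 0, 255)

def detect(image: np.ndarray) -> float:
    """Estimate how much of the secret pattern the image contains."""
    centered = image - image.mean()
    return float((centered * PATTERN).sum() / (PATTERN * PATTERN).sum())

clean = rng.integers(0, 256, size=(256, 256)).astype(float)
print(f"score without watermark: {detect(clean):.2f}")         # close to 0
print(f"score with watermark:    {detect(embed(clean)):.2f}")  # close to the embedding strength
```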

What it provides:

  • Useful for identifying AI-generated content coming from a given ecosystem.
  • Scalable: you can analyze large volumes of media automatically.

Limitations:

  • If content is generated by another model that doesn’t watermark: no signal.
  • Some transformations can degrade detection.
  • It’s not 100% reliable. It says “likely AI,” not “true/false.”

3) Forensics (artifact analysis) and “model-based” detection

The third family includes analysis tools that look for inconsistencies (noise patterns, compression artifacts, motion, lighting, generation traces, etc.). This approach is actively evaluated in public programs (for example, NIST) to measure detector robustness against realistic content.
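
As a concrete (and deliberately simple) example of this family, here is a sketch of Error Level Analysis, one classic forensic signal, assuming Pillow is installed; the file path is a placeholder. Re-saving a JPEG at a known quality and diffing it against the original highlights regions whose compression history differs, which can hint at local edits. Real detectors combine many such signals and interpret them statistically.

```python
# Error Level Analysis (ELA): re-encode at a fixed JPEG quality, then amplify
# the per-pixel differences so regions with a different compression history
# stand out. Illustrative only; not a reliable detector on its own.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Stretch the (usually tiny) differences so they become visible.
    max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
    return diff.point(lambda value: min(255, value * 255 // max_diff))

# error_level_analysis("suspect.jpg").save("suspect_ela.png")   # paths are placeholders
```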

What it provides:

  • Works even if the content has no metadata.
  • Can detect partial manipulations.

Limitations:

  • What works today may fail tomorrow.
  • “Manipulation probability” scores must be interpreted carefully (false positives / false negatives).

The right mindset: move from a “test” to a “trust workflow”

If you’re a brand, a media company, or simply a team moderating content, the challenge isn’t to have one tool. It’s to have a clear path: capture → storage → editing → publishing → verification.

Operational checklist to prove your content is authentic

1) Require provenance when it matters

For press, corporate, or legal content: prefer sources and tools that preserve or produce C2PA.

2) Secure capture as close to the sensor as possible

When you need strong evidence (insurance claims, inspections, visual KYC, compliance), “authenticated capture” solutions exist. They don’t just analyze media after the fact; they secure the moment of capture. In practice, the device or application computes a fingerprint (hash) at capture time, ties it to a timestamp and an identity (device / organization), then signs that information so later changes become detectable. To achieve “authenticated capture” (strong proof from the source), you typically need either hardware that signs at capture time or a secure capture app that adds attestation + signature and then (often) exposes it via C2PA / Content Credentials.
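
As a rough sketch of that flow (assuming the `cryptography` package; the field names and the in-memory key are illustrative, since real products keep the key in a secure element and add hardware attestation), the idea looks like this:

```python
# Sign (hash, timestamp, device id) at capture time; any later change to the
# media bytes, or to the record itself, makes verification fail.
import hashlib
import json
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # in practice: provisioned in secure hardware

def capture_proof(media_bytes: bytes, device_id: str) -> dict:
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": int(time.time()),
        "device_id": device_id,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {**record, "signature": device_key.sign(payload).hex()}

def verify_proof(media_bytes: bytes, record: dict, public_key) -> bool:
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
    except InvalidSignature:
        return False
    return claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()

photo = b"...raw sensor bytes..."
proof = capture_proof(photo, device_id="cam-42")
print(verify_proof(photo, proof, device_key.public_key()))             # True
print(verify_proof(photo + b"edited", proof, device_key.public_key())) # False
```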

3) Keep the original and track versions

Always store the original (hash, timestamp, restricted access). Version exports and preserve metadata (avoid destructive re-exports “out of habit”).

4) Make the result visible

The real issue isn’t the lack of standards. It’s that end users often see nothing. A badge, an “about this image” panel, a short origin summary… this changes everything, especially for viral content.

Conclusion

What is most likely to win

  • An interoperable provenance standard (C2PA / Content Credentials), because it creates a shared language across capture, editing, and platforms.
  • Watermarks in major content generation models.
  • A multi-layer approach (provenance + detection + process).

What won’t work

  • “One AI solution that detects all fakes.”
  • Metadata alone.

If you publish visuals (brand), validate evidence (insurance, marketplace), or moderate content (platform), you’ll need to treat authenticity as a product building block: proof of origin, traceability, clear display, and risk management. It’s complex, but it may become the key difference between those who integrate it now, and those who don’t.
