If you've used OpenAI's image generation tools recently, every picture you've created carries an invisible tag. Not just metadata that can be stripped with a screenshot, but something baked into the pixels themselves. Now, a researcher on X claims to have extracted that watermark from GPT Image 2 outputs, raising questions about the durability of provenance systems that the industry is betting on to solve the deepfake problem.

The account @pleometric posted images showing what they describe as the isolated watermark pattern. "If you zoom in you can see that fried look gpt images have," they wrote. The researcher noted they began investigating after noticing that these watermarks appear to steer video generation models like Wan into predictable artifacts, producing checkered patterns when watermarked images are used as input.
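The post doesn't spell out an extraction method, but a standard way researchers isolate a fixed pixel pattern is to average high-pass residuals across many outputs: scene content cancels out, while anything stamped identically into every image remains. A minimal sketch of that idea, assuming a folder of same-resolution generations (the samples/ path and sigma value are illustrative, and this is not necessarily what @pleometric did):

```python
import glob
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

residuals = []
for path in glob.glob("samples/*.png"):  # hypothetical folder of same-size outputs
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    # High-pass: subtract a blurred copy, suppressing scene content while
    # keeping the fine-grained texture where a pixel watermark would live.
    residuals.append(img - gaussian_filter(img, sigma=2))

# Averaging cancels uncorrelated image detail; anything stamped identically
# into every output survives.
pattern = np.mean(residuals, axis=0)

# Rescale to 0-255 so the pattern can be inspected visually.
vis = (pattern - pattern.min()) / (np.ptp(pattern) + 1e-9) * 255
Image.fromarray(vis.astype(np.uint8)).save("extracted_pattern.png")
```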

How OpenAI's Watermarking Works

OpenAI uses two complementary approaches to mark AI-generated content. The first is C2PA (Content Credentials), an open metadata standard developed by a coalition including Adobe, Microsoft, and the BBC. C2PA embeds cryptographically signed provenance information into image files, recording details like the creation timestamp, the generating software, and whether the content has been edited.
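You can check whether a file carries these credentials without special tooling. In JPEGs, C2PA manifests travel in APP11 segments as JUMBF boxes; the sketch below is a rough presence check only. The "jumb" byte-signature heuristic is an assumption on my part, and it verifies nothing cryptographically; real validation needs a dedicated library such as c2pa-python.

```python
import struct
import sys

def has_c2pa_segment(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":           # no SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:                # SOS: compressed scan data follows
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        # APP11 (0xFFEB) segment whose payload contains a JUMBF box signature.
        if marker == 0xEB and b"jumb" in segment.lower():
            return True
        i += 2 + length
    return False

print(has_c2pa_segment(sys.argv[1]))
```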

But metadata is fragile. A screenshot removes it. Uploading to most social platforms strips it. WhatsApp, iMessage, and Facebook all re-encode images, silently deleting any embedded credentials. That's why the second approach matters: invisible pixel-level watermarks that survive these transformations.
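That fragility is trivial to demonstrate. Pillow, like most platform image pipelines, writes only the metadata you explicitly hand back, so a plain round trip discards embedded credentials (the filename here is hypothetical):

```python
from PIL import Image

img = Image.open("credentialed.jpg")              # hypothetical signed image
print("before:", sorted(img.info))                # e.g. exif, xmp, icc_profile

# Save without passing any metadata back in, as platform pipelines do.
img.save("reencoded.jpg", quality=85)

# C2PA/EXIF are gone; only baseline JFIF fields remain.
print("after:", sorted(Image.open("reencoded.jpg").info))
```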

Google's SynthID, which is baked into Gemini and Imagen outputs, is the best-known example of this approach. The watermark is distributed across the image's pixels in a way that can survive cropping, compression, and color adjustments. OpenAI has been developing similar tamper-resistant watermarking for its image models, though it has been less forthcoming about the technical details than Google.
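SynthID's actual scheme is a learned neural watermark and isn't public, but the underlying principle, a faint signal spread across every pixel and recovered by correlating against a secret key, can be sketched in a few lines. The seed and strength below are arbitrary toy values, not anyone's real parameters:

```python
import numpy as np

def key_pattern(shape, seed=1234):
    # Pseudorandom +/-1 field derived from a secret seed: the "key".
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=shape)

def embed(image, strength=2.0, seed=1234):
    # Spread a faint copy of the key across every pixel.
    return np.clip(image + strength * key_pattern(image.shape, seed), 0, 255)

def detect(image, seed=1234):
    # Correlate mean-centered pixels with the key: near zero if unmarked,
    # near the embed strength if marked.
    centered = image - image.mean()
    return float(np.mean(centered * key_pattern(image.shape, seed)))

rng = np.random.default_rng(0)
cover = rng.uniform(0, 255, size=(256, 256))     # stand-in for a real image
print("unmarked:   ", round(detect(cover), 3))          # ~0.0, up to noise
print("watermarked:", round(detect(embed(cover)), 3))   # ~2.0, the strength
```

Because the key is spread over every pixel rather than stored in one place, cropping or recompressing destroys only part of the evidence, which is what makes this family of watermarks harder to strip than metadata.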


Reports suggest GPT Image 2 generates images with persistent tiling textures that users suspect are steganographic signatures. The company has confirmed its commitment to invisible provenance signals, but hasn't documented the specific implementation for the latest model.
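A tiling texture, if it exists, is exactly the kind of thing autocorrelation exposes: a periodic additive pattern produces strong off-origin peaks at multiples of the tile size. A rough check, with an illustrative filename and lag mask:

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("gpt_image_output.png").convert("L"), dtype=float)
img = img - img.mean()

# Autocorrelation via the Wiener-Khinchin theorem: inverse FFT of the
# power spectrum. Periodic tiling shows up as secondary peaks.
spectrum = np.fft.fft2(img)
autocorr = np.real(np.fft.ifft2(spectrum * np.conj(spectrum)))
autocorr /= autocorr[0, 0]                       # normalize: 1.0 at zero lag

# Mask the trivial near-zero lags (they wrap into all four corners).
for sy in (slice(None, 8), slice(-8, None)):
    for sx in (slice(None, 8), slice(-8, None)):
        autocorr[sy, sx] = 0

dy, dx = np.unravel_index(np.argmax(autocorr), autocorr.shape)
print(f"strongest repeat at offset ({dy}, {dx}), strength {autocorr[dy, dx]:.3f}")
```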

Why Extracting the Watermark Matters

@pleometric was clear about the limitations of their findings. "This doesn't mean all watermarks for all image types will be the same," they wrote, "or that is their only watermarking strategy." Different resolutions, aspect ratios, and generation modes may use different patterns. The researcher also emphasized that robustness testing across variations takes time.

Still, extraction is the first step toward removal. And the implications of that cut in multiple directions.

For the researcher's stated goal of improving video generation, stripping input watermarks could eliminate unwanted artifacts from downstream models. That's a legitimate technical concern for anyone building pipelines that combine multiple AI tools.

But watermark removal also opens the door to misuse. If AI-generated images can be laundered to appear authentic, the provenance systems designed to combat misinformation and deepfakes become less effective. Regulators are already counting on these systems: the EU AI Act's transparency obligations, which take effect in August 2026, require AI-generated content to be labeled, and C2PA metadata combined with invisible watermarking is widely cited as the leading technical approach to compliance.

The Arms Race Problem

Watermarking has always faced a fundamental tension: robustness versus removability. A watermark strong enough to survive every transformation would visibly degrade image quality. A watermark subtle enough to be imperceptible can, in principle, be detected and removed.


Research from the University of Edinburgh found that many AI fingerprinting methods achieve high accuracy on unaltered images, but that performance drops dramatically once an image is attacked: simple changes like JPEG compression, resizing, or blurring can smudge the fingerprints. A University of Maryland study found that watermarks can be washed out, removed, or even added to human-generated images to trigger false detections.
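That failure mode is easy to reproduce with the toy spread-spectrum mark sketched earlier: a single pass through a lossy codec is enough to crush the detection score. The quality setting and expected values below are illustrative, not figures from either study:

```python
import io
import numpy as np
from PIL import Image

def key_pattern(shape, seed=1234):
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=shape)

def detect(image, seed=1234):
    centered = image - image.mean()
    return float(np.mean(centered * key_pattern(image.shape, seed)))

rng = np.random.default_rng(0)
cover = rng.uniform(0, 255, size=(256, 256))
marked = np.clip(cover + 2.0 * key_pattern(cover.shape), 0, 255)

# One trip through JPEG, the mildest of the attacks those studies describe.
buf = io.BytesIO()
Image.fromarray(marked.astype(np.uint8)).save(buf, format="JPEG", quality=60)
attacked = np.asarray(Image.open(buf), dtype=float)

print("before JPEG:", round(detect(marked), 3))    # ~2.0
print("after JPEG: ", round(detect(attacked), 3))  # typically far lower
```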

This cat-and-mouse dynamic is why experts argue watermarking can't be the sole defense against synthetic media abuse. Detection classifiers, media literacy, and fact-checking remain necessary complements. As one NBC News report put it, expecting watermarks alone to solve deepfakes is unrealistic.

What Comes Next

OpenAI is retiring DALL-E 2 and DALL-E 3 on May 12, 2026, making GPT Image 2 the migration path forward. Whatever watermarking strategy the company has built into the model will become the default for one of the most widely used image generation systems in the world.

Whether @pleometric's extraction leads to practical removal tools remains to be seen. The researcher noted that more testing is needed across different image types and configurations. But the work highlights a tension that the AI industry has yet to resolve: how to build provenance systems robust enough to matter in a world where adversaries have every incentive to break them.

Tools already exist to strip C2PA metadata from AI-generated images. If pixel-level watermarks prove equally vulnerable, the industry will need to reconsider how much weight to put on technical authentication versus broader systemic solutions.