AI-Generated Images in News: The Ethics Nobody's Figured Out Yet


Last month, a major American newspaper used an AI-generated image to illustrate a story about homelessness. The image depicted a realistic-looking scene of a homeless encampment that never existed.

The backlash was immediate and fierce. Critics argued the newspaper had fabricated visual evidence for a real news story. Defenders said it was clearly labeled and no different from stock photography.

Welcome to the ethics of AI images in journalism—a question nobody has satisfactorily answered.

The Current State

AI image generation has improved dramatically. Tools like Midjourney, DALL-E, and Stable Diffusion can produce photorealistic images that are increasingly difficult to distinguish from photographs.

Newsrooms are using these tools in various ways:

Illustration. Creating images for opinion pieces, feature stories, or abstract concepts where photography isn’t possible or appropriate.

Visualization. Depicting historical events, future scenarios, or hypothetical situations.

Enhancement. Improving or completing real photographs—extending backgrounds, adjusting composition.

Stock replacement. Using AI-generated images instead of purchasing stock photography for generic illustrations.

Some of these uses seem relatively uncontroversial. Others raise serious concerns.

Where the Lines Blur

The ethical questions aren’t always obvious. Consider these scenarios:

Scenario 1: An AI image illustrates an op-ed about climate anxiety. The image is clearly stylized and symbolic. Few would object.

Scenario 2: An AI image shows what a proposed development might look like. It’s labeled as a “conceptual rendering.” This seems acceptable—architectural renderings have always been speculative.

Scenario 3: An AI image illustrates a news story about housing affordability, showing a generic suburban street scene. No real street is depicted, but viewers might assume it’s a photograph. Is this problematic?

Scenario 4: An AI image shows a realistic homeless encampment for a story about homelessness. It’s labeled as AI-generated, but it depicts something that could be mistaken for documentation of reality. This is where it gets concerning.

Scenario 5: An AI tool is used to “enhance” a real photograph—extending the background or adjusting elements. The line between enhancement and fabrication becomes very thin.

The further an image moves toward realistic depiction of real-world events, the more serious the ethical concerns become.

The Core Tension

Journalism depends on accurate representation of reality. Photographs serve as evidence—documentation that something happened, that someone was somewhere, that a scene existed.

AI-generated images undermine this evidentiary function. They can depict things that never existed with convincing realism. Used in news contexts, they blur the line between documentation and illustration.

This matters because trust is journalism’s most valuable asset. Audiences need to believe what they see in news coverage. Once that trust erodes—once readers wonder whether any image might be AI-generated—the relationship between journalism and its audience is damaged.

What Major Outlets Are Doing

I surveyed policies at about two dozen major news organizations. The approaches vary:

Prohibition: Some outlets (particularly wire services) prohibit AI-generated images in news coverage entirely. Real photos only.

Labeling requirements: Many outlets require clear labeling of any AI-generated or AI-enhanced images, typically with a caption or badge.

Context-dependent: Some allow AI images for illustration (opinion, features) but not for news coverage.

Case-by-case: Others have no clear policy, leaving decisions to individual editors.

The wire services’ prohibition makes sense given their role—they’re providing raw material for other outlets, and trust in authenticity is paramount.

For other publishers, the labeling approach is most common. But labeling has problems.

Why Labeling Isn’t Enough

The argument for labeling: “As long as we tell people it’s AI-generated, we’re being transparent.”

But research suggests labeling is less effective than we’d hope:

  • Many readers don’t notice labels, especially in social media contexts
  • Labels don’t prevent initial misimpression—by the time you read the label, you’ve already processed the image
  • Screenshot sharing strips labels away, leaving just the image
  • The line between “AI-generated” and “AI-enhanced” is unclear to most readers

Labeling is necessary but not sufficient. It doesn’t address the deeper question of whether certain uses are appropriate regardless of labeling.

A Framework for Decision-Making

Based on my research and conversations with editors, here’s a framework for thinking about AI image use (a rough checklist sketch in code follows the five considerations):

Consider the function. Is the image serving as evidence, illustration, or visualization? Evidence (documenting reality) is the most problematic use. Illustration (representing concepts symbolically) is the least problematic.

Consider the realism. Clearly stylized images are less likely to be mistaken for documentation. Photorealistic images of real-world scenes risk misleading viewers.

Consider the subject. Images depicting vulnerable populations, crime scenes, or politically charged situations carry higher risks of harm if misperceived.

Consider the context. News coverage demands higher standards than feature or opinion content.

Consider the alternatives. If real photography is available and appropriate, use it. AI images shouldn’t replace authentic documentation when documentation is possible.
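
For newsrooms that want to turn this framework into tooling, here’s a minimal sketch of the five considerations as a pre-publication checklist. This is my own illustration: the category names, weights, and thresholds are assumptions to be tuned by your standards desk, not an industry standard.

```python
from dataclasses import dataclass

# Hypothetical risk categories and weights; illustrative only,
# not an industry standard.
FUNCTION_RISK = {"illustration": 1, "visualization": 2, "evidence": 3}
REALISM_RISK = {"stylized": 1, "semi_realistic": 2, "photorealistic": 3}
CONTEXT_RISK = {"opinion": 1, "feature": 2, "news": 3}

@dataclass
class ImageUse:
    function: str               # "illustration", "visualization", "evidence"
    realism: str                # "stylized", "semi_realistic", "photorealistic"
    context: str                # "opinion", "feature", "news"
    sensitive_subject: bool     # vulnerable people, crime scenes, politics
    real_photo_available: bool  # authentic documentation exists

def assess(use: ImageUse) -> str:
    """Rough, rule-of-thumb recommendation for one proposed AI-image use."""
    # The fifth consideration is absolute: never replace available documentation.
    if use.real_photo_available:
        return "reject: use the real photograph"
    score = (FUNCTION_RISK[use.function]
             + REALISM_RISK[use.realism]
             + CONTEXT_RISK[use.context])
    if use.sensitive_subject:
        score += 2
    if score >= 7:
        return "reject: too easily mistaken for documentation"
    if score >= 5:
        return "escalate: senior editorial sign-off required"
    return "allow: label clearly and log the decision"

# A stylized op-ed image (Scenario 1) scores 3; a photorealistic
# encampment image in news coverage (Scenario 4) scores at least 9.
print(assess(ImageUse("illustration", "stylized", "opinion", False, False)))
print(assess(ImageUse("illustration", "photorealistic", "news", True, False)))
```

Run against the earlier scenarios, the stylized op-ed image clears easily while the photorealistic encampment image fails by a wide margin, which matches most editors’ instincts.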

Developing Organizational Policy

If you’re developing AI image policies for a newsroom, here’s what I’d recommend:

Start with a clear prohibition. Default to “no AI-generated images in news coverage” and create exceptions from there.

Create categories of use. Different rules for news versus opinion versus features make sense.

Require review. Any AI image use should require editorial sign-off, not just reporter discretion.

Mandate robust labeling. Consistent, visible, persistent identification of AI images.

Plan for emerging challenges. AI enhancement of real photos, deepfake detection, evolving tools—your policy will need updating.

Document decisions. Keep records of when AI images were used and why, for institutional learning; a sketch of what such a record might hold appears below.
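
To make the review, labeling, and documentation points concrete, a decision log can be as simple as one structured record per image. A minimal sketch, with hypothetical field names to adapt to your own CMS:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIImageDecision:
    """One log entry per AI-generated or AI-enhanced image.
    Field names are hypothetical; adapt to your CMS."""
    story_slug: str
    tool_used: str          # e.g. "Midjourney", "Stable Diffusion"
    use_type: str           # "generated" or "enhanced"
    justification: str      # why authentic photography wasn't used
    approving_editor: str   # editorial sign-off, not reporter discretion
    label_text: str         # the exact disclosure shown to readers
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example entry (all values invented for illustration):
entry = AIImageDecision(
    story_slug="housing-affordability-feature",
    tool_used="Midjourney",
    use_type="generated",
    justification="Generic street scene; no specific location reported",
    approving_editor="deputy photo editor",
    label_text="Illustration: AI-generated image",
)
```

Even a shared spreadsheet with these columns beats relying on editor memory; the point is that every use leaves a reviewable trail.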

Some newsrooms are bringing in outside AI implementation help to develop comprehensive policies that address not just images but all AI applications in the newsroom.

The Broader Concern

The AI image question is part of a larger challenge: maintaining trust in an era when synthetic media is increasingly sophisticated and pervasive.

Deepfake video, AI-generated audio, synthetic text—all of these raise similar questions about authenticity and trust. Journalism’s response to AI images will set precedents for these other challenges.

The stakes are high. If audiences lose confidence that what they see in news coverage is real, journalism’s democratic function is compromised.

What Individual Journalists Should Do

While waiting for clearer organizational policies, individual journalists should:

Err toward caution. When in doubt, don’t use AI-generated images for news coverage.

Document everything. Keep records of when and why AI images are used.

Be transparent. If you use AI images, make that absolutely clear to audiences.

Question necessity. Before using an AI image, ask whether it’s actually needed or whether the story works without it.

Stay current. This technology is evolving rapidly. Understanding what’s possible helps inform ethical decisions.

Looking Forward

The ethics of AI images in journalism will continue evolving. The technology is improving faster than our ethical frameworks can adapt.

My prediction: we’ll see increasingly strict policies from major outlets as the risks become clearer. The competitive pressure to use AI images will conflict with the institutional imperative to maintain trust.

The organizations that establish clear principles early—and stick to them—will be better positioned as these challenges intensify.

Working with partners who understand both technology and journalism ethics—like Team400’s AI team—can help organizations develop robust frameworks before a crisis forces reactive decisions.

The question isn’t whether to engage with AI image technology. It’s whether to engage thoughtfully or reactively.


I’m collecting examples of AI image policies from newsrooms. If your organization has developed guidelines, I’d love to see them.