AI-Generated Images in News Media: We Need to Talk About This Before It's Too Late
Last month, a mid-tier US news outlet published a feature story about housing affordability illustrated with what appeared to be photographs of families in their homes. Except they weren’t photographs. They were AI-generated images — photorealistic, but entirely synthetic. No real people, no real homes, no real moments captured. The outlet disclosed this in small text beneath each image. Most readers didn’t notice.
This isn’t an isolated incident. It’s a pattern that’s accelerating across news media globally, and the Australian industry isn’t immune. The question isn’t whether AI-generated images will become common in news publications — they already are. The question is whether we’re going to establish meaningful ethical standards before a major credibility crisis forces us to.
The Case for AI Images in News
Let me steel-man the argument first, because it’s not entirely unreasonable.
News organisations are under relentless financial pressure. Photography is expensive. Commissioning original photojournalism for every story costs real money — photographer fees, travel, equipment, editing. Stock photography is cheaper but generic, and licensing fees add up. Many smaller publications have eliminated staff photographer positions entirely.
AI image generation costs essentially nothing per image. For a publisher producing 50 articles a day, replacing even half of their stock photo licensing with AI-generated alternatives saves meaningful money. For illustrations and conceptual imagery — the kind that accompanies opinion pieces, analysis, and abstract topics — AI generation can produce something more visually relevant than a generic stock photo of “person looking at laptop.”
There’s also an argument about representation. Stock photography has well-documented diversity problems. AI generation can create images that represent a wider range of people, settings, and contexts without the constraints of available stock libraries.
These are legitimate operational considerations. But they don’t resolve the ethical problems.
The Core Ethical Issues
Trust and authenticity. Journalism’s fundamental currency is credibility. Readers trust that what they see in a news publication corresponds to reality in some meaningful way. Photographs, even staged ones, document real people and real places. When a publication illustrates a story about bushfire recovery with an AI-generated image of a person surveying a damaged property, it creates a visual claim about reality that has no basis in fact. Even with disclosure, the emotional impact of a photorealistic image operates below the level of conscious processing — readers feel something before they read the fine print.
The Verge ran an excellent investigation into this phenomenon, documenting how readers’ recall of news stories is significantly influenced by accompanying images, regardless of whether those images are real or generated. The image becomes part of the reader’s memory of the event.
Consent and likeness. AI image generators are trained on billions of photographs of real people. When you generate an image of “an elderly woman in a regional Australian hospital,” the output is shaped by the faces of real elderly women in the training data. Those women didn’t consent to having their likenesses used as training data, let alone recombined into synthetic images illustrating news stories about hospital wait times.
This isn’t hypothetical harm. There have been documented cases of AI-generated faces bearing recognisable similarity to real individuals, creating a bizarre situation where a person’s apparent likeness illustrates a news story they have no connection to.
The slippery slope to fabrication. Once a publication normalises AI-generated imagery in some contexts (conceptual illustrations, stock photo replacements), the boundary between acceptable and unacceptable uses becomes blurry. Today it’s an illustration for an opinion piece. Tomorrow it’s a “representative image” for a news report. Next week it’s filling in a scene that a photographer couldn’t access. Each step feels incremental. The cumulative effect is the erosion of the evidentiary function of images in journalism.
What the Industry Standards Currently Say
The standards landscape is fragmented. The Society of Professional Journalists ethics code doesn’t specifically address AI-generated images — it was written for a different era. The Associated Press has issued guidance prohibiting AI-generated images in its news report, which is a clear and strong position. Reuters has similar restrictions.
In Australia, the Media, Entertainment and Arts Alliance (MEAA) journalist code of ethics includes provisions about accuracy and not misleading audiences, but doesn’t specifically address AI imagery. The ABC has internal guidelines that restrict the use of AI-generated images in news content, though the details aren’t publicly available.
The problem is that these standards, where they exist, are voluntary. There’s no regulatory framework in Australia — or anywhere — that governs the use of AI-generated images in news media. The Australian Communications and Media Authority (ACMA) regulates broadcast media but has limited jurisdiction over digital publishers, and AI imagery doesn’t fall neatly into existing regulatory categories.
Where I Think the Lines Should Be
This is an opinion piece, so here’s my opinion on where the ethical lines should sit.
Hard no: news reporting. Any story that reports on actual events should not include AI-generated images presented as or resembling documentary photographs. If you’re reporting on a flood in Lismore, use real photographs or don’t use photographs. An AI-generated flood scene, no matter how clearly labelled, undermines the documentary function of news photography.
Acceptable with clear labelling: conceptual illustration. For opinion pieces, analysis, and abstract topics (technology trends, economic concepts, future scenarios), AI-generated illustrations can serve a legitimate purpose — similar to how publications use infographics, cartoons, or artist impressions. But the labelling needs to be prominent, not buried in a caption.
Grey area: feature journalism. Long-form features often use environmental photography that’s more atmospheric than documentary. An AI-generated image that illustrates a mood or concept rather than claiming to depict a real scene occupies a grey area. My view is that this should lean toward real photography, but I acknowledge reasonable people can disagree.
Absolute prohibition: depicting real identifiable people. AI-generated images that depict or could be mistaken for real public figures should never appear in news media. The potential for misinformation is too high and the reputational risk to the publication is severe.
The Disclosure Problem
Even when publications disclose AI-generated images, the disclosure often fails. Small text, inconsistent labelling, and placement that readers skip over mean the disclosure technically exists but practically doesn’t function.
If the industry is going to use AI images, the labelling standard needs to be as clear as “SPONSORED CONTENT” labels on native advertising. Visible, consistent, and unambiguous. Some publications have started using watermark-style overlays or distinct visual borders to signal AI generation. That’s better than a text caption, but we’re still experimenting.
The Bigger Picture
What concerns me most isn’t any single use of AI imagery in journalism. It’s the gradual normalisation of synthetic visual content in an industry whose credibility depends on authenticity. Each individual decision might seem reasonable. The cumulative effect is a media environment where readers can no longer trust that what they see in the news bears any relationship to what actually happened.
We’ve already lived through a decade of declining trust in media. Adding “you can’t trust the photos either” to the list of reasons people distrust journalism isn’t a minor thing. It’s potentially catastrophic for an industry that depends on public trust to function.
The time to establish clear, enforceable standards is now — before the practice becomes so entrenched that reform feels impossible. Australian media organisations, industry bodies, and regulators need to stop treating AI-generated imagery as a technology question and start treating it as a journalism ethics question. Because that’s what it is.