Newsroom Automation Is Moving Beyond Copy: It's Starting to Touch Editorial Judgment
For the past few years, AI in newsrooms has been comfortably contained. It handled the things editors were happy to delegate: transcription, headline A/B testing, comment moderation, automated sports scores, and earnings report summaries. Useful, time-saving, but fundamentally mundane. No editor lost sleep over an AI writing up a cricket scorecard.
That’s changing. The latest wave of AI tools being adopted by media companies isn’t just automating production tasks. It’s starting to inform — and in some cases, make — editorial decisions. What stories to cover. How to prioritise coverage. Where to assign resources. What angle to take.
This is a fundamentally different kind of automation, and the industry needs to talk about it more honestly than it currently is.
Where the Line Has Moved
Story discovery. Several newsrooms are now using AI systems that monitor social media trends, government releases, court filings, company announcements, and other data sources to identify potential stories. This isn’t new; tools like NewsWhip and CrowdTangle (before Meta shut it down) have been doing this for years. What’s new is that the latest tools don’t just surface trending topics; they assess newsworthiness. They predict which stories will generate the most reader engagement, which align with the publication’s editorial focus, and which represent genuine public interest rather than ephemeral noise.
When an AI system tells a news desk “this government report about infrastructure spending is likely to be a top-performing story for your audience,” it’s making an editorial judgment about newsworthiness. A human still decides whether to cover it, but the AI is shaping the menu of options that human chooses from.
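None of these vendors publish their scoring logic, but the shape of such a system is easy to sketch. Here is a minimal, hypothetical Python version; every field name and weight is my assumption, not any vendor’s design. What it illustrates is that the weights themselves are an editorial value judgment dressed up as tuning parameters.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    headline: str
    source: str                  # e.g. "court filing", "govt release"
    predicted_engagement: float  # model output, 0..1
    editorial_fit: float         # match to the desk's focus areas, 0..1
    public_interest: float       # separately modelled, 0..1

def newsworthiness(c: Candidate,
                   w_engage: float = 0.5,
                   w_fit: float = 0.3,
                   w_interest: float = 0.2) -> float:
    # The weights decide whose values win when the signals disagree.
    return (w_engage * c.predicted_engagement
            + w_fit * c.editorial_fit
            + w_interest * c.public_interest)

candidates = [
    Candidate("Infrastructure spending report", "govt release", 0.82, 0.70, 0.75),
    Candidate("Celebrity restaurant opening", "social trend", 0.95, 0.20, 0.05),
    Candidate("Water utility finances", "court filing", 0.30, 0.80, 0.90),
]

# The "morning briefing" the desk sees is already a ranked, filtered view.
for c in sorted(candidates, key=newsworthiness, reverse=True):
    print(f"{newsworthiness(c):.2f}  {c.headline}")
```

With these made-up numbers the water utility story edges out the celebrity one (0.57 to roughly 0.55); raise w_engage to 0.6 and the order flips. The editorial line between those two front pages is a single number in a config file.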
Resource allocation. At least two major Australian publishers are experimenting with AI-assisted resource allocation: systems that recommend which reporters should be assigned to which stories based on their expertise, availability, and historical performance on similar topics. One project I’ve heard about goes further, suggesting optimal story length, multimedia elements, and publication timing based on audience behaviour data.
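I haven’t seen these publishers’ systems, but the core of “who gets this story” is plausibly a scoring function like the hypothetical one below. Every field name and weight is an illustrative assumption.

```python
def assignment_score(reporter: dict, story: dict) -> float:
    """Hypothetical reporter-story match score; not any publisher's
    known design."""
    topics = set(story["topics"])
    beat_overlap = len(set(reporter["beats"]) & topics) / max(len(topics), 1)
    availability = 1.0 if reporter["free_today"] else 0.3
    # Past engagement on similar topics: the term that quietly lets
    # historical performance steer future assignments.
    track_record = sum(reporter["past_engagement"].get(t, 0.5)
                       for t in topics) / max(len(topics), 1)
    return 0.5 * beat_overlap + 0.2 * availability + 0.3 * track_record

reporters = [
    {"name": "A", "beats": ["courts", "local-govt"], "free_today": True,
     "past_engagement": {"local-govt": 0.4}},
    {"name": "B", "beats": ["business"], "free_today": True,
     "past_engagement": {"local-govt": 0.8}},
]
story = {"slug": "council-budget", "topics": ["local-govt"]}

best = max(reporters, key=lambda r: assignment_score(r, story))
print(best["name"])  # "A": beat expertise outweighs B's engagement record
```

The track_record term is the one to watch: reporters who have historically drawn engagement on a topic keep getting assigned to it, and that advantage compounds over time.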
Angle selection. This is the most concerning development: AI tools that analyse competing coverage and audience engagement data to suggest which angle a story should take. “Coverage of this policy announcement that focuses on the impact on small business performs 3x better with your audience than coverage focusing on macroeconomic implications.” That’s data-driven editorial guidance, and it pushes uncomfortably close to letting audience metrics dictate coverage priorities.
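Whatever produces that “3x” figure, the underlying calculation is almost certainly ordinary descriptive arithmetic over an archive of stories tagged by angle. A toy sketch, with invented tags and numbers:

```python
import statistics
from collections import defaultdict

# Invented archive: past stories tagged with the angle they took,
# with a normalised engagement score.
archive = [
    {"angle": "small-business impact", "engagement": 0.91},
    {"angle": "small-business impact", "engagement": 0.78},
    {"angle": "macroeconomic implications", "engagement": 0.27},
    {"angle": "macroeconomic implications", "engagement": 0.31},
]

by_angle = defaultdict(list)
for story in archive:
    by_angle[story["angle"]].append(story["engagement"])

for angle, scores in by_angle.items():
    print(f"{angle}: mean engagement {statistics.mean(scores):.2f}")

# A "performs 3x better" claim is just this ratio: 0.845 / 0.29 ≈ 2.9
```

The arithmetic is trivial. The editorial question is whether past engagement by angle should have any vote at all in which angle the next story takes.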
Why Some Editors Welcome This
Let me steelman the case, because it’s not entirely without merit.
Newsrooms are under unprecedented resource pressure. Teams are smaller than they were a decade ago. The volume of information that could potentially be covered has exploded. An editor making morning assignment decisions is juggling more inputs than a human can reasonably process — wire feeds, social trends, competitor coverage, reader feedback, analytics data, reporter availability, ongoing investigation priorities.
AI tools that synthesise some of this information and present options are genuinely useful. They can surface stories that would otherwise be missed because no human was monitoring that particular government database or court registry. They can identify connections between seemingly unrelated events that suggest a bigger story. They can flag when a publication is under-covering a topic that its audience cares about.
The editors I’ve spoken to who use these tools insist they don’t follow the AI’s recommendations blindly. They describe it as “having a very well-informed intern who reads everything and makes suggestions.” The final decision remains human.
Why Other Editors Are Deeply Uncomfortable
The counter-argument is equally compelling.
Editorial judgment is the thing that distinguishes journalism from content. It’s the decision to cover an important but unsexy story about a water utility’s financial mismanagement instead of the engaging but ultimately trivial story about a celebrity restaurant opening. It’s the choice to assign your best reporter to investigate a tip that might lead nowhere rather than deploying them on the guaranteed-traffic topic the data says readers want.
When AI tools optimise for engagement — which they overwhelmingly do, because that’s the metric that’s easiest to measure — they push coverage toward what audiences want to read rather than what they need to know. These aren’t always the same thing. The entire concept of editorial judgment rests on the idea that professional journalists sometimes know better than audience metrics what stories matter.
The Guardian’s editorial guidelines explicitly state that coverage decisions should be driven by editorial judgment, not audience data. But how long does that position hold when the AI-assisted newsroom down the road is growing faster because its coverage decisions are better aligned with audience demand?
The Transparency Problem
Here’s what worries me most: readers don’t know this is happening. When a publication runs a story, readers assume a human editor decided it was newsworthy. They assume a human reporter chose the angle based on professional judgment. They assume the decision about what to cover and what to ignore reflects human editorial values.
If AI systems are shaping these decisions — even partially, even as one input among many — that should be disclosed. Not because AI involvement is inherently bad, but because readers deserve to understand the forces shaping the information they consume.
Most publications using AI in editorial decision-making haven’t disclosed it. Some argue disclosure isn’t necessary because humans still make the final call. But by that logic, a publication wouldn’t need to disclose that a major advertiser influenced coverage decisions either, as long as an editor technically signed off. The influence matters, not just the final signature.
What Should Happen
The media industry needs a serious conversation about AI’s role in editorial judgment. Not the hand-wringing kind where everyone agrees it’s complicated and then goes back to what they were doing. A practical conversation that results in standards.
Minimum standards for disclosure when AI systems influence editorial decisions. Clear policies about which editorial functions can be AI-assisted and which can’t. Regular audits of whether AI-driven coverage priorities are diverging from editorial mission. And genuine investment in ensuring that AI tools are designed to support editorial judgment rather than replace it.
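The audit item, at least, is straightforwardly implementable, if (a big if, and my assumption here) the editorial mission can be written down as a target topic mix. Drift from that mix is then one line of arithmetic:

```python
def coverage_drift(published: dict, mission: dict) -> float:
    """Total variation distance between the topic mix actually
    published and the mix the editorial mission targets.
    0.0 = perfectly on-mission, 1.0 = completely diverged."""
    topics = published.keys() | mission.keys()
    return 0.5 * sum(abs(published.get(t, 0.0) - mission.get(t, 0.0))
                     for t in topics)

# Invented shares of stories published last quarter vs. mission targets.
published = {"politics": 0.20, "celebrity": 0.45, "local-govt": 0.10, "business": 0.25}
mission   = {"politics": 0.30, "celebrity": 0.10, "local-govt": 0.35, "business": 0.25}
print(f"drift: {coverage_drift(published, mission):.2f}")  # 0.35
```

The hard part isn’t the maths. It’s getting a newsroom to write its mission down as numbers it’s willing to be audited against.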
This isn’t about rejecting technology. AI tools that help editors make better-informed decisions are genuinely valuable. But there’s a difference between better-informed decisions and data-driven decisions, and the media industry hasn’t figured out where the line should be.
We’d better figure it out before the tools make the decisions for us.