Opinion: Journalists Should Stop Fearing AI and Start Shaping It

I’ve grown frustrated with how the journalism community talks about AI.

The dominant tone is defensive: AI threatens our jobs, our standards, our profession. We need to resist, regulate, protect ourselves from the coming storm.

I understand this instinct. The threats are real. But the defensive crouch is also self-defeating. While journalists worry about AI, others are building it—and building it without meaningful input from people who understand journalism’s values and responsibilities.

It’s time for journalists to stop fearing AI and start shaping it.

The Current Stance

The journalism industry’s relationship with AI companies has been primarily adversarial.

Legal battles over training data dominate the conversation. Negotiations focus on extracting licensing payments. Coverage often emphasizes AI harms and risks. Industry statements tend toward warning and caution.

This isn’t wrong, exactly. Copyright concerns are legitimate. Licensing payments are appropriate. AI harms deserve coverage. Caution is reasonable.

But the adversarial stance has a cost: it excludes journalists from conversations about how AI should work.

When OpenAI, Google, or Anthropic makes decisions about how their systems handle news and information, who’s in the room? Engineers, ethicists, lawyers—but rarely working journalists who understand what quality news actually requires.

What Engagement Would Look Like

I’m not suggesting journalists abandon concerns about AI. I’m suggesting adding constructive engagement alongside justified criticism.

What could that look like?

Technical involvement. Journalists with technical skills could work inside AI companies, shaping how systems handle news, verification, and attribution. Some do this now; more could.

Advisory relationships. AI companies need guidance on how their products affect journalism. Formal advisory relationships could provide this—if journalists engage rather than just litigate.

Standards development. Industry standards for AI and news are being developed now. Journalists should be writing those standards, not watching them emerge from elsewhere.

Constructive criticism. Coverage that doesn’t just identify problems but suggests solutions carries more influence than pure opposition.

Experimentation. Journalists who understand AI through hands-on use speak with more credibility than those whose understanding is purely theoretical.

The Opportunity Cost of Disengagement

Consider what happens when journalists aren’t engaged:

AI systems make decisions about news without understanding newsroom values. Training data selection reflects technical considerations, not journalistic ones. Product designs optimize for metrics that may not align with journalism’s public interest mission.

The results are predictable: AI systems that treat all content equally, that don’t understand source credibility, that generate misinformation as confidently as truth.

These aren’t failures of technology—they’re failures of guidance. The technologists building these systems often don’t understand what journalism requires. They’d benefit from journalists’ input. But that input requires engagement, not just opposition.

The Fear Isn’t Working

Here’s the uncomfortable truth: the defensive stance hasn’t protected journalism from AI disruption.

AI-generated content floods the internet. Platform algorithms continue prioritizing engagement over quality. Search is being transformed by AI summaries. Training continues regardless of licensing disputes.

The fear and opposition have been ineffective at preventing the changes journalists worried about. What they have done is exclude journalists from influencing those changes.

At some point, a strategy has to be judged by its results. The current approach isn't producing the outcomes journalists want.

What Journalists Uniquely Understand

Journalists have expertise that AI development desperately needs:

Verification. How do you determine if information is true? Journalists have developed sophisticated methods over decades. AI systems are just beginning to grapple with this challenge.

Source assessment. Not all sources are equal. Journalists understand how to evaluate credibility, motivation, and reliability. AI systems largely don’t.

Context and nuance. Information without context misleads. Journalists understand how to provide context. AI systems often strip it away.

Public interest. What matters? What deserves attention? Journalists make these judgments constantly. AI systems optimize for engagement, not importance.

Ethical constraints. When shouldn’t you publish? What harms should be weighed against news value? Journalists navigate these questions; AI systems mostly don’t.

This knowledge is valuable. It could shape AI development constructively—if journalists choose to share it.

Practical Steps

For journalists considering more constructive engagement:

Develop AI literacy. You can’t shape what you don’t understand. Learn how these systems actually work—not in technical depth, but well enough to engage substantively.

Build relationships. Get to know people building AI systems. Understand their perspectives and constraints. Find common ground where it exists.

Offer solutions. When you identify problems with AI systems, also suggest solutions. “Here’s what’s wrong” is less valuable than “here’s what’s wrong and here’s how to fix it.”

Participate in standards. Industry groups are developing AI standards for media. Participate actively.

Experiment visibly. Demonstrate responsible AI use in journalism. Show what good practice looks like.

The Collaboration Worry

I know some journalists will read this as advocating capitulation—getting co-opted by tech companies, legitimizing systems that harm journalism.

That’s a real risk. Journalists who engage should stay clear-eyed about power dynamics and maintain appropriate skepticism.

But disengagement has its own risks—and we’re living them now. AI is being developed without journalist input. The results affect journalism regardless.

The choice isn’t between engagement and protection. It’s between engagement and irrelevance.

A Different Future

Imagine a different relationship between journalism and AI:

AI systems that understand source credibility and communicate uncertainty. Tools that assist investigative work while respecting ethical constraints. Discovery systems that surface quality journalism rather than burying it.

This future isn’t inevitable. But it’s possible if journalists engage in building it rather than just opposing what others build.

The defensive crouch is understandable. It’s also insufficient.

Journalists should shape AI’s future, not just worry about it. The alternative is letting others shape it for us—and we’re unlikely to be happy with the results.