Opinion: Newsrooms Are Hiring the Wrong AI Roles


I’ve been watching media AI job postings for two years now. The patterns are concerning.

Newsrooms are hiring for AI roles, which is good. But they're often hiring the wrong roles, with the wrong expectations, for the wrong purposes. This creates expensive mismatches that hurt both organizations and the people they hire.

Here’s what I’m seeing go wrong.

The Machine Learning Engineer Mistake

Many newsrooms post for “Machine Learning Engineer” or “AI Engineer” with expectations that don’t match the role.

The job description asks for:

  • Building and training custom models
  • Deep learning implementation
  • Natural language processing development
  • Neural network architecture

But what the newsroom actually needs is:

  • Evaluating and implementing existing AI tools
  • Integrating APIs into editorial workflows
  • Training staff on AI applications
  • Connecting AI outputs to publishing systems

These are entirely different skill sets. ML engineering is about building models from scratch. Most newsrooms don’t need that—they need people who can work with existing models effectively.

The result: they hire expensive ML engineers who are overqualified for the actual work, frustrated by how little model development the job involves, and underequipped for the integration and training work that's actually needed.

Or they can't fill the role at all, because they can't compete on salary with the tech companies that actually need ML engineers.

The Innovation Theater Hire

Some organizations hire AI roles for signaling rather than substance.

The role comes with an impressive title—“Head of AI Strategy,” “Chief AI Officer,” “Director of Artificial Intelligence.” The press release celebrates the hire.

But there’s no budget, no authority, and no clear mandate. The hire is expected to transform operations through force of personality alone.

These roles fail predictably. The person can’t succeed without resources. The organization doesn’t get value from the hire. Everyone ends up disappointed.

If you’re creating an AI leadership role, ensure it has:

  • Meaningful budget for tools and implementation
  • Authority to make binding decisions
  • Clear mandate with measurable objectives
  • Support from senior leadership

Without these, you’re setting someone up to fail.

The External Expert Fallacy

Some newsrooms hire AI experts with no journalism background, assuming the technical skills will transfer.

They bring in someone from tech—maybe a FAANG veteran or AI startup founder—expecting that AI knowledge will translate directly to newsroom applications.

Sometimes it works. Often it doesn’t.

Newsroom AI requires understanding both domains. Technical people without journalism context often don't understand:

  • Editorial workflows and why they’re structured as they are
  • Journalist concerns about AI’s impact on their work
  • The specific ethical constraints on newsroom AI use
  • What problems actually need solving versus theoretical possibilities

The best AI hires I’ve seen either come from within journalism (with technical learning) or from tech with genuine engagement in media (not just resume interest).

What Newsrooms Actually Need

Based on my observations, here's what most newsrooms actually need from their AI hires:

The Translator. Someone who understands both technology and journalism, who can bridge the domains. This person evaluates tools, identifies applications, and helps journalists understand what’s possible. They don’t need to build models—they need to understand them and communicate effectively.

The Implementer. Someone technically skilled enough to integrate AI tools with existing systems. This is engineering work, but application engineering rather than ML research: Python proficiency, API integration, database work, the practical skills for making things work together (a rough sketch of this kind of glue work appears below).

The Trainer. Someone who can teach journalists to use AI tools effectively. This calls for training skills, patience, and an understanding of adult learning. The role is often better filled by an internal promotion than an external hire.

The Policy Person. Someone to develop and enforce AI guidelines. This might be existing editorial leadership with additional responsibility rather than a dedicated hire.

Notice what’s not on this list: researchers who build novel models. That’s appropriate for AI labs, not newsrooms.
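
To make the Implementer concrete: here's a minimal sketch of the glue work that role does day to day. It pulls a draft from a CMS, sends it to an existing AI service for a summary, and writes the result back flagged for editorial review. Everything in it is hypothetical: the endpoints, field names, and the AI_API_KEY environment variable are placeholders, not any real newsroom's or vendor's API.

    # A sketch of typical newsroom AI integration work. All endpoints,
    # field names, and credentials below are hypothetical placeholders.
    import os
    import requests

    CMS_API = "https://cms.example.com/api/articles"  # hypothetical CMS
    AI_API = "https://ai.example.com/v1/summarize"    # hypothetical AI service

    def summarize_draft(article_id: str) -> None:
        # Pull the draft text out of the CMS.
        article = requests.get(f"{CMS_API}/{article_id}", timeout=10).json()

        # Hand it to an existing AI service. No model training involved.
        response = requests.post(
            AI_API,
            headers={"Authorization": f"Bearer {os.environ['AI_API_KEY']}"},
            json={"text": article["body"], "max_words": 60},
            timeout=30,
        )
        response.raise_for_status()
        summary = response.json()["summary"]

        # Write the machine-generated summary back, flagged for human review.
        requests.patch(
            f"{CMS_API}/{article_id}",
            json={"ai_summary": summary, "needs_review": True},
            timeout=10,
        )

Note what's absent from the sketch: no training loops, no GPUs, no model weights. The hard parts are authentication, error handling, and fitting the output into the review workflow, which is exactly why this is application engineering rather than ML research.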

The Salary Reality Check

Newsrooms often price AI roles unrealistically.

They post ML Engineer roles at journalism salaries, expecting to compete with Google and OpenAI. It doesn’t work.

The reality: if you want genuine ML research talent, you need to pay tech-industry rates, often two to three times what newsrooms typically offer for technical roles.

But here's the good news: you probably don't need that talent. The roles you actually need (integrators, trainers, translators) command lower salaries and are far easier to find.

Be honest about what you’re willing to pay and what that budget actually buys.

The Internal Development Path

Often, the best AI hires are internal.

Journalists who've become interested in technology. Digital producers who've been experimenting with AI tools. Data journalists looking to expand their scope.

These people already understand your organization, your workflows, your culture. They carry credibility with the newsroom that external hires have to earn.

Investing in AI training for them is often a better bet than hiring externally and then having to teach journalism from scratch.

Consider structured development paths for interested internal staff before posting external roles.

Getting External Help Right

When external AI expertise is needed, consider whether hiring is the right approach.

team400.ai can provide expertise for specific projects without permanent headcount. This is often more cost-effective than hiring for:

  • One-time strategic assessments
  • Specific technical implementations
  • Training program development
  • Policy creation

Permanent hires make sense for ongoing operational needs. Project-based work may be better served by project-based engagement.

The Job Description Test

Before posting an AI role, test your job description:

Specificity check. Does it describe specific work products and outcomes? Or vague aspirations like “drive AI transformation”?

Skill alignment. Do the required skills match the actual work? Or are you asking for ML research when you need integration?

Salary reality. Is the compensation competitive for the skills required? Have you checked market rates?

Success criteria. How will you know if the person succeeds? What will they have accomplished in one year?

Authority clarity. What can this person actually decide? What resources do they control?

If you can’t answer these clearly, you’re not ready to hire.

A Better Approach

Here’s what I’d recommend to newsrooms building AI capability:

Start with an audit. Understand what you actually need before hiring. What problems need solving? What skills are you missing?

Define specific projects. Have concrete initiatives for AI roles to work on, not just general mandates.

Consider internal development. Who in your organization could grow into AI roles with support?

Right-size expectations. Most newsrooms need integrators and trainers, not researchers.

Plan for support. AI hires need tools, training budget, and organizational support to succeed.

Engage expertise strategically. Use partners offering AI strategy support for project work rather than trying to hire for every need.

The Stakes

Getting AI hiring wrong is expensive—both the direct cost of mis-hires and the opportunity cost of failed initiatives.

Newsrooms have limited resources. Investing those resources in the wrong roles delays genuine AI capability development.

The organizations that get this right will build real advantages. Those that don’t will waste money and time while falling behind.

The good news: getting it right isn’t complicated. It just requires honesty about what you actually need, what you can actually pay, and what you’re actually prepared to support.

Most newsrooms don’t need AI superstars. They need practical people doing practical work. Hiring for that reality works better than aspirational job postings that attract the wrong candidates for roles that aren’t what they seem.


I’m curious about AI hiring experiences—both from organizations that have hired and people who’ve been hired. What’s working? What isn’t?