Do Media AI Ethics Boards Actually Work? An Assessment
Newsrooms love creating committees.
When AI emerged as a significant operational question, the response was predictable: AI ethics boards, AI governance committees, AI advisory groups. Impressive names. Senior appointments. Announced with press releases.
But do they actually work?
I’ve observed AI governance structures at about a dozen news organizations over the past two years. Some function effectively. Many are window dressing. Here’s what separates the two.
The Typical Model
Most newsroom AI governance structures look similar:
- A committee of senior leaders, sometimes with external experts
- Periodic meetings (monthly or quarterly)
- A mandate to review AI policies, oversee implementation, address ethical questions
- Often reporting to the editor-in-chief or similar senior figure
On paper, this sounds reasonable. In practice, effectiveness varies enormously.
What Doesn’t Work
Several patterns predict failure:
Meeting Without Mandate
Some committees meet, discuss AI topics generally, and adjourn without making decisions or taking action.
They review trends. They share concerns. They acknowledge complexity. But nothing actually changes as a result.
These committees satisfy the organizational need to be “doing something about AI” without actually doing anything. They’re governance theater.
Senior But Disconnected
Another failure mode: committees composed of very senior people who don’t actually use AI tools.
The executive editor, the general counsel, the chief technology officer—impressive titles, but often disconnected from how AI is actually being used in the newsroom.
These committees make policy based on theoretical concerns rather than operational reality. Their guidance often proves impractical.
Policy Without Enforcement
Some committees create policies that go unenforced.
They publish guidelines. They announce standards. But nobody checks whether the guidelines are followed. Nobody enforces consequences when they’re violated.
Policy without enforcement is worse than no policy—it creates false confidence that risks are managed when they aren’t.
Reactive Only
Committees that only meet when problems arise provide inadequate governance.
By the time a problem requires committee attention, the damage is often done. Reactive governance can’t prevent issues—it can only respond to them.
Insufficient Technical Understanding
Ethical AI governance requires understanding what AI can and can’t do.
Committees composed entirely of journalists and lawyers often lack the technical fluency to evaluate AI applications meaningfully. They can’t distinguish between low-risk and high-risk uses because they don’t understand the technology.
What Works
The effective committees I’ve observed share different characteristics:
Clear Decision Authority
Effective committees have explicit authority to make binding decisions.
They can approve or reject AI implementations. They can set policy that must be followed. They have enforcement mechanisms.
This authority must be granted from organizational leadership and respected by staff. Without it, governance is advisory at best.
Operational Connection
Effective committees include people who actually use AI in their work.
A beat reporter who uses AI research tools. An editor who works with AI-assisted content. A producer who integrates AI into production workflows.
These operational representatives ground committee discussions in reality. They can explain what’s actually happening, not just what should happen in theory.
Proactive Review
Effective committees review AI implementations before deployment, not after problems emerge.
They create evaluation frameworks. They assess proposed uses against criteria. They identify risks before those risks materialize.
Proactive governance prevents problems rather than just responding to them.
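To make "assess proposed uses against criteria" concrete, here is a minimal sketch of what a pre-deployment review rubric might look like. The criteria, weights, and thresholds are hypothetical illustrations, not a standard; any real committee would define its own.

```python
# Hypothetical pre-deployment review rubric: each criterion is weighted,
# each proposed AI use is scored 0 (low risk) to 5 (high risk) per criterion,
# and any hard-stop failure blocks approval regardless of the score.

CRITERIA = {
    "accuracy_risk": 3,      # could errors reach the audience unreviewed?
    "privacy_risk": 3,       # does the tool process personal data?
    "attribution_risk": 2,   # could output be mistaken for original reporting?
    "reversibility": 1,      # how hard would it be to roll the use back?
}

def review(scores: dict, hard_stops: list) -> str:
    """Return a committee disposition for one proposed AI use."""
    if hard_stops:
        # Some failures (e.g. a legal bar) end the review immediately.
        return "rejected: " + "; ".join(hard_stops)
    # A missing score is treated as worst case (5) to discourage omissions.
    total = sum(weight * scores.get(name, 5) for name, weight in CRITERIA.items())
    max_total = sum(weight * 5 for weight in CRITERIA.values())
    if total <= max_total * 0.3:
        return "approved"
    if total <= max_total * 0.6:
        return "approved-with-conditions"
    return "escalate to full committee"
```

The point of writing the rubric down, even this crudely, is that approval becomes a repeatable decision rather than an ad hoc conversation.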
Technical Expertise
Effective committees include or access meaningful technical expertise.
This might be internal—a data scientist, an AI engineer. Or it might be external—advisors with technical backgrounds, specialists who understand both journalism and technology.
Technical fluency enables meaningful evaluation rather than superficial review.
Regular Cadence
Effective committees meet regularly regardless of crisis.
Monthly meetings create rhythm. Regular review of ongoing implementations catches drift. Consistent attention signals organizational priority.
Crisis-only meetings suggest AI governance isn’t actually important until something goes wrong.
Documentation
Effective committees document their decisions and reasoning.
Records create accountability. They enable learning from past decisions. They provide evidence of good-faith governance if problems arise.
Undocumented deliberation is hard to evaluate and impossible to learn from.
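A decision log does not need elaborate tooling. As an illustrative sketch (the field names and file format here are my own, not any established schema), one structured record per ruling in an append-only file is enough to create the accountability described above:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class GovernanceDecision:
    """One committee ruling, recorded with enough context to revisit later."""
    decided_on: str                 # ISO date of the meeting
    proposal: str                   # what was reviewed (tool, use case)
    decision: str                   # e.g. "approved", "rejected", "approved-with-conditions"
    reasoning: str                  # why the committee ruled this way
    conditions: list = field(default_factory=list)  # any attached requirements
    review_by: str = ""             # when the decision should be revisited

def append_decision(log_path: str, d: GovernanceDecision) -> None:
    """Append one decision as a JSON line; append-only storage preserves history."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(d)) + "\n")
```

Even a log this simple answers the questions that matter later: what was decided, when, on what grounds, and when it is due for re-review.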
Building Effective Governance
If you’re establishing or improving AI governance, here’s what I’d recommend:
Define authority clearly. What can the committee decide? What requires executive approval? What happens if committee guidance is ignored?
Staff appropriately. Include operational perspectives, not just senior leadership, and secure access to technical expertise, whether internal or external.
Establish regular process. Monthly meetings. Standing agenda items. Systematic review of implementations.
Create evaluation frameworks. What criteria determine whether an AI use is acceptable? Document them.
Build enforcement. What happens when policies are violated? Make consequences real.
Connect to operations. Governance that exists separately from daily work will be ignored. Integrate governance into workflows.
Plan for evolution. AI capabilities change rapidly. Governance structures need regular revision.
The Resources Question
Effective governance requires resources—time, expertise, attention.
For smaller newsrooms, full committees may be impractical. Alternatives include:
Individual responsibility. A single senior person with AI governance authority, supported by external expertise when needed.
Consortia. Multiple smaller newsrooms sharing governance resources—common policies, shared experts.
External support. Working with partners who understand AI governance. Firms providing AI implementation help can offer governance frameworks and expertise that smaller newsrooms can’t develop internally.
The right approach depends on scale and resources. But some governance is better than none.
Measuring Effectiveness
How do you know if governance is working?
Decisions made. Is the committee actually making decisions, not just discussing?
Decisions followed. Are committee decisions implemented? What happens when they’re not?
Problems prevented. Are you catching issues before they cause harm?
Staff awareness. Do journalists know the policies and understand them?
External credibility. Would your governance survive scrutiny from critics?
If governance isn’t producing measurable outcomes, it isn’t working.
The Honest Assessment
Most newsroom AI governance I’ve observed isn’t effective.
It exists because organizations feel they should have governance. It satisfies that feeling without actually managing risk.
This isn’t universal—some committees function well. But they’re the minority.
The gap between governance in theory and governance in practice is significant. Closing that gap requires commitment that many organizations haven’t made.
Why It Matters
Ineffective governance creates real risks:
Reputation damage. When AI use goes wrong and governance failed to prevent it, organizational credibility suffers.
Legal exposure. Inadequate oversight of AI could create liability in copyright, privacy, or other areas.
Ethical failure. Governance exists to prevent harm. Failed governance enables harm.
Staff uncertainty. Journalists unclear on what’s permitted make worse decisions than those with clear guidance.
Getting governance right matters—for the organization, for journalists, and for the audiences who depend on trustworthy news.
A Call for Honesty
My request to news organizations: assess your AI governance honestly.
Is your committee making real decisions with real authority?
Are those decisions based on operational reality and technical understanding?
Is there enforcement when decisions are ignored?
If not, your governance is theater. Better to acknowledge that than pretend otherwise.
Effective governance takes work. If you’re not willing to do that work, be honest about it.
I’m collecting case studies of newsroom AI governance—what works, what doesn’t, and why. If you’re involved in governance at your organization, I’d welcome your perspective.