How to Use AI for Investigative Journalism Without Getting Burned
Investigative journalism and AI have an uncomfortable relationship. The potential is obvious—AI can process documents, identify patterns, and surface connections faster than any human team. But the risks are equally clear. Get it wrong, and you’ve built your investigation on a foundation of machine-generated errors.
I’ve spent the past year watching investigative teams experiment with AI, talking to editors about what’s worked and what’s blown up. Here’s a practical guide to using these tools without becoming a cautionary tale.
Where AI Actually Helps Investigations
Let’s start with what genuinely works.
Document processing at scale. When you’re dealing with thousands of pages—leaked emails, corporate filings, court records—AI can extract entities, dates, and relationships far faster than manual review. The BBC’s investigations team has talked publicly about using AI to process the Panama Papers-style document dumps that would take human teams months.
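To make that concrete, here’s a minimal sketch of the entity-extraction step using spaCy. The folder of plain-text files, the small English model, and the function name are all assumptions for illustration, not a prescription:

```python
# A minimal sketch of batch entity extraction, assuming spaCy with the small
# English model installed (python -m spacy download en_core_web_sm) and a
# folder of already-extracted .txt files.
import spacy
from collections import defaultdict
from pathlib import Path

nlp = spacy.load("en_core_web_sm")

def entity_index(doc_dir: str) -> dict:
    """Map each extracted person/org/place/date to the files mentioning it."""
    mentions = defaultdict(set)
    for path in Path(doc_dir).glob("*.txt"):
        doc = nlp(path.read_text(errors="ignore"))
        for ent in doc.ents:
            if ent.label_ in {"PERSON", "ORG", "GPE", "DATE"}:
                mentions[(ent.label_, ent.text)].add(path.name)
    return mentions

index = entity_index("leaked_docs")
# Most-mentioned entities first -- each one is a lead to verify, not a fact.
print(sorted(index.items(), key=lambda kv: -len(kv[1]))[:20])
```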
Pattern recognition in structured data. If you have a dataset—campaign contributions, corporate ownership records, permit applications—AI can identify anomalies and connections that might escape human attention. This is particularly useful for financial investigations where the story lives in the patterns.
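Here’s one hedged example of what that looks like with pandas, assuming a hypothetical contributions.csv with donor, amount, and date columns and a $10,000 reporting threshold:

```python
# A sketch of a single pattern check on a hypothetical contributions file.
import pandas as pd

df = pd.read_csv("contributions.csv", parse_dates=["date"])

# Flag donors who repeatedly give just under the reporting threshold -- a
# classic structuring pattern worth a closer look, not proof of anything.
near_threshold = df[df["amount"].between(9000, 9999)]
repeat_donors = (
    near_threshold.groupby("donor").size()
    .loc[lambda s: s >= 3]
    .sort_values(ascending=False)
)
print(repeat_donors)
```

Anything a check like this flags is a starting point for reporting, not evidence of wrongdoing.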
Translation and transcription. Cross-border investigations increasingly require working in multiple languages. AI translation isn’t perfect, but it lets you identify relevant foreign-language documents for professional translation rather than hiring translators to review everything.
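For that triage step, the open-source whisper package can produce a rough English rendering of foreign-language audio. A sketch, with the file name as a placeholder:

```python
# Assumes pip install openai-whisper; "base" trades accuracy for speed.
import whisper

model = whisper.load_model("base")

# task="translate" asks Whisper for a rough English rendering of
# foreign-language audio -- good enough to decide whether the recording
# merits a professional translator, not good enough to quote from.
result = model.transcribe("interview_fr.mp3", task="translate")
print(result["text"])
```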
Source organization. Tools like NotebookLM can help keep track of complex webs of sources, documents, and interviews. When you’re six months into an investigation, remembering who said what when becomes genuinely difficult.
Where AI Will Get You in Trouble
Now for the dangers.
Fact generation. AI will confidently state things that aren’t true. I’ve seen tools “find” connections between people who have no relationship, attribute quotes that were never said, and cite documents that don’t exist. If you don’t verify every AI-generated fact against primary sources, you’re writing fiction.
Analysis and interpretation. AI can tell you that two companies share an address. It can’t tell you whether that’s a meaningful red flag or a coincidence of commercial real estate. Investigative journalism requires human judgment about what matters—AI provides data, not insight.
Source assessment. AI has no way to evaluate source credibility or motivation. It treats a press release and a court filing as equivalent. The human skill of knowing which sources to trust remains essential.
Legal risk assessment. AI doesn’t understand defamation law, privacy regulations, or the specific legal risks of your story. Legal review must remain human—the cost of getting this wrong is too high.
A Practical AI Investigation Workflow
Here’s how I’d structure an AI-assisted investigation:
Phase 1: Acquisition and Organization
When documents or data first arrive, AI can help process them into usable form.
- Use AI transcription for audio and video materials
- Run document OCR and text extraction
- Use entity extraction to identify people, organizations, dates, and places mentioned
- Create a searchable database of materials
At this stage, AI is doing mechanical work. The risk is low because you’re not relying on AI judgment—just its ability to convert formats and tag content.
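The “searchable database” step can be as simple as a SQLite full-text index. A minimal sketch, assuming a folder of already-extracted text files; the table, file, and query names are placeholders:

```python
# Build a full-text index with SQLite's FTS5 extension (compiled into
# Python's sqlite3 module in most standard builds).
import sqlite3
from pathlib import Path

conn = sqlite3.connect("investigation.db")
conn.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(filename, body)")

for path in Path("processed_docs").glob("*.txt"):
    conn.execute("INSERT INTO docs (filename, body) VALUES (?, ?)",
                 (path.name, path.read_text(errors="ignore")))
conn.commit()

# Later, mid-investigation: which files mention both the company and the name?
query = '"Acme Holdings" AND Smith'
for (filename,) in conn.execute("SELECT filename FROM docs WHERE docs MATCH ?", (query,)):
    print(filename)
```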
Phase 2: Pattern Identification
Once materials are organized, AI can help spot patterns worth investigating.
- Search for unexpected connections between entities
- Identify documents that mention key terms or people
- Look for temporal patterns—clusters of activity, gaps, timing correlations
- Generate visualizations of relationships
This is where AI becomes more useful but also more dangerous. Every pattern AI identifies is a hypothesis, not a finding. Human journalists must verify each one against primary sources before treating it as fact.
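One way to keep connections framed as hypotheses is to model them explicitly as a graph. A sketch with networkx, where the input dictionary stands in for whatever your Phase 1 extraction produced:

```python
# Link entities that appear in the same document, then look for paths
# between two people of interest.
import networkx as nx
from itertools import combinations

# {filename: entities mentioned}, e.g. the output of the extraction step
doc_entities = {
    "filing_014.txt": {"Acme Holdings", "J. Smith", "Harbor LLC"},
    "email_202.txt": {"Harbor LLC", "R. Jones"},
}

G = nx.Graph()
for doc, entities in doc_entities.items():
    for a, b in combinations(sorted(entities), 2):
        G.add_edge(a, b, doc=doc)  # each edge remembers which document linked the pair

# A path between two people is a hypothesis to check against the underlying
# documents, never a publishable finding on its own.
if nx.has_path(G, "J. Smith", "R. Jones"):
    print(nx.shortest_path(G, "J. Smith", "R. Jones"))
```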
Phase 3: Deep Dive (Human-Led)
The actual investigative work remains human.
- Interview sources
- Verify document authenticity
- Evaluate credibility
- Assess legal implications
- Make editorial judgments about newsworthiness
AI might help prepare for interviews—suggesting questions based on documents or identifying inconsistencies to probe. But the judgment calls are human.
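If you do use an LLM for interview prep, feed it only material a human has already vetted. A sketch using OpenAI’s Python SDK; the model name, file name, and prompt are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

excerpts = open("vetted_excerpts.txt").read()  # human-reviewed material only

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whatever your newsroom has approved
    messages=[{
        "role": "user",
        "content": "Based only on these document excerpts, list apparent "
                   "inconsistencies worth probing in an interview:\n\n" + excerpts,
    }],
)
print(response.choices[0].message.content)  # a brainstorm for a human to filter
```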
Phase 4: Writing and Production
AI can assist with drafts but must not drive the narrative.
- Use AI to summarize background for context sections
- Generate first drafts of timelines or explainer boxes
- Create data visualizations
Everything goes through rigorous human editing. If AI misunderstands something, you need editors who can catch it.
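For the timeline and visualization work, something as simple as matplotlib is often enough. A sketch with placeholder events:

```python
# Draft timeline graphic from a list of (date, label) events.
import matplotlib.pyplot as plt
from datetime import date

events = [
    (date(2023, 1, 12), "Shell company registered"),
    (date(2023, 3, 4), "First contract awarded"),
    (date(2023, 9, 28), "Director resigns"),
]

fig, ax = plt.subplots(figsize=(8, 2))
ax.plot([d for d, _ in events], [0] * len(events), "o")
for d, label in events:
    ax.annotate(label, (d, 0), xytext=(0, 10),
                textcoords="offset points", rotation=30)
ax.get_yaxis().set_visible(False)
fig.autofmt_xdate()
fig.tight_layout()
fig.savefig("timeline.png")  # the draft graphic still goes through human editing
```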
The Verification Layer
The most important part of any AI-assisted investigation workflow is systematic verification. Here’s a protocol:
Nothing AI-generated appears in final copy without human confirmation. Every fact, quote, date, and name must be traced back to a primary source that a human has reviewed.
Document AI’s limitations in your notes. When AI makes mistakes during the investigation, and it will, record them. This helps you calibrate how much to trust the tools and creates a written record of their limitations for legal review.
Build redundancy into fact-checking. Have multiple people verify AI-surfaced facts independently. The confidence that AI expresses has no relationship to accuracy.
Establish clear attribution standards. Your story should clearly indicate when AI tools were used and how. Transparency protects credibility.
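One way to make the protocol hard to skip is to encode it in your tooling. A sketch of a fact ledger, with every field name invented for illustration:

```python
# A claim is publishable only when a primary source is on file and two
# humans have signed off independently (Python 3.10+ for the union type).
from dataclasses import dataclass, field

@dataclass
class Fact:
    claim: str
    ai_generated: bool
    primary_source: str | None = None      # citation or path a human has reviewed
    verified_by: list[str] = field(default_factory=list)   # independent checkers
    ai_errors_noted: list[str] = field(default_factory=list)  # tool mistakes, kept for legal

    def publishable(self) -> bool:
        return self.primary_source is not None and len(set(self.verified_by)) >= 2

fact = Fact("Acme Holdings shares an address with Harbor LLC", ai_generated=True)
assert not fact.publishable()  # blocked until sourced and double-checked
```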
Case Study: What Went Wrong
A publisher I won’t name ran an investigation last year that relied heavily on AI analysis of corporate filings. The AI identified what appeared to be a network of shell companies connected to a prominent businessperson.
The story ran. The lawyers called. It turned out several of the “connections” were based on AI misreading common names—John Smith in one document connected to a different John Smith in another. The AI was confident about matches that didn’t exist.
The retraction was embarrassing. The legal settlement was expensive. The reputational damage is ongoing.
The investigation could have worked. The underlying methodology was sound. But the verification layer failed—humans trusted AI-generated connections without confirming them against original documents.
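The missing guardrail is easy to state in code: a name-only match is never a match. A sketch, with the record fields and the corroboration threshold as assumptions:

```python
def same_person(a: dict, b: dict) -> bool:
    """Records are dicts with optional 'name', 'dob', 'address', 'id_number' keys."""
    if a.get("name") != b.get("name"):
        return False
    # Require at least two corroborating attributes beyond the name.
    corroborating = sum(
        1 for key in ("dob", "address", "id_number")
        if a.get(key) and a.get(key) == b.get(key)
    )
    return corroborating >= 2  # a shared name alone is never enough

rec1 = {"name": "John Smith", "dob": "1970-02-14"}
rec2 = {"name": "John Smith", "dob": "1981-07-03"}
assert not same_person(rec1, rec2)  # the match the tool was "confident" about
```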
Building AI Skills on Your Team
If you’re leading an investigative team, here’s how I’d build AI capability:
Start with low-risk applications. Transcription and translation have limited downside. Let your team develop comfort with AI tools before using them for analysis.
Invest in training. Journalists need to understand AI limitations, not just capabilities. Bring in experts to explain how these tools actually work—and fail.
Create clear policies. Your team needs written guidelines about when AI can be used, what verification is required, and how AI use must be disclosed.
Learn from others. Organizations like the Reuters Institute and Nieman Lab document AI experiments in journalism. Study both the successes and the failures. Team400.ai can also help newsrooms develop AI capabilities without the trial-and-error approach.
AI can make investigative journalism faster and more powerful. But only if we’re honest about its limitations and rigorous about verification. The technology changes; the fundamentals of good journalism don’t.