In the aftermath of a mass shooting in Minneapolis, journalists at the Minnesota Star Tribune used artificial intelligence to rapidly translate the shooter's cryptic journal. This technological assistance, combined with traditional reporting and human verification, allowed the newsroom to quickly understand the shooter's motives and background.
The incident illustrates a modern use case for AI in journalism: technology accelerating information processing during a breaking news event, while human oversight remains critical for accuracy.
Key Takeaways
- The Minnesota Star Tribune used AI to translate hundreds of pages from a shooter's journal hours after an attack.
- The AI identified the text as Faux Cyrillic and provided initial translations that revealed obsessions with past massacres and guns.
- Human language experts were essential for verifying the AI's output, correcting significant errors that could have altered the story's narrative.
- The combination of AI speed and human verification allowed the newspaper to publish a comprehensive profile of the shooter within three days.
A Breaking News Crisis and a Digital Puzzle
On the morning of August 27, 2025, a shooter later identified as Robin Westman attacked the Annunciation Catholic Church in Minneapolis. The event resulted in the deaths of two children and injuries to 21 other people. Westman died at the scene from a self-inflicted gunshot wound.
As reporters covered the developing story, they discovered that Westman had released a series of videos that morning. The videos contained footage of a handwritten journal spanning hundreds of pages. The text was not in standard English, presenting a major obstacle for the news team working under a tight deadline.
The challenge was to decipher this large volume of text to understand the shooter's mindset and potential motives. This is where the Star Tribune's AI Lab, an internal group experimenting with new technologies, played a crucial role.
What is Faux Cyrillic?
Faux Cyrillic is a method of writing English words using letters from the Cyrillic alphabet (used for Russian and other Slavic languages) that resemble Latin letters. For example, 'R' might be replaced with the Cyrillic 'Я'. While it looks like a foreign language, it is simply a substitution cipher for English, making it confusing to the untrained eye but translatable.
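Because Faux Cyrillic is a letter-for-letter substitution rather than a real translation, decoding it amounts to mapping each Cyrillic lookalike back to the Latin letter it resembles. A minimal sketch in Python, using an illustrative mapping (not the journal's actual scheme):

```python
# Each Cyrillic lookalike maps back to the Latin letter it visually
# resembles. This table is illustrative, not the journal's real cipher.
FAUX_TO_LATIN = str.maketrans({
    "Я": "R",  # mirrored Latin R
    "И": "N",  # mirrored Latin N
    "Ц": "U",  # resembles a U with a tail
    "Д": "A",  # resembles a stylized A
    "В": "B", "Е": "E", "О": "O", "С": "C", "Т": "T", "У": "Y",
})

def decode(text: str) -> str:
    """Replace every known Cyrillic lookalike with its Latin counterpart."""
    return text.translate(FAUX_TO_LATIN)

print(decode("ЯЦИ"))  # -> RUN
```

Characters outside the table pass through unchanged, which is why mixed text like "ТЕТЯIS" still reads correctly after decoding.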
Deploying AI for Rapid Translation
Dana Chiueh, a news innovation engineer at the Star Tribune, began the process of decoding the journal. She took screenshots of the journal pages from the video and uploaded them into an AI model, specifically ChatGPT.
"Breaking news is one of those domains where time is really of the essence, and that’s where technology can really jump in and help out," Chiueh stated. Her first prompt asked the AI to identify the language. The system correctly identified it as Faux Cyrillic.
The Translation Process
The next step was translation. The AI began converting the cryptic text into readable English. The initial findings were disturbing, revealing writings that glorified other mass murderers, referenced the Sandy Hook and Columbine shootings, and showed a fixation on firearms. Chiueh initially captured the pages manually but later developed a script to automate the process.
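The article does not describe Chiueh's script or the model calls it made, but the automation step it mentions, replacing manual page-by-page capture with a batch loop, might look something like this sketch. The `translate` callable is a stand-in for whatever model invocation the real script used:

```python
from pathlib import Path
from typing import Callable, Dict

def translate_pages(image_paths: list[Path],
                    translate: Callable[[Path], str]) -> Dict[str, str]:
    """Run a translation step over every captured journal page.

    Hypothetical sketch: `translate` stands in for the actual model
    call (e.g. uploading a screenshot to a vision-capable chat model),
    which the article does not detail. Results are keyed by filename
    so reporters can trace any passage back to its source page.
    """
    results: Dict[str, str] = {}
    for path in sorted(image_paths):  # process pages in order
        results[path.name] = translate(path)
    return results

# Usage with a stub translator:
pages = [Path("page_002.png"), Path("page_001.png")]
out = translate_pages(pages, lambda p: f"decoded {p.stem}")
```

Keying output by page keeps the AI's translations auditable, which matters for the verification step described below.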
Three hours into the investigation, she reached out to the newsroom to collaborate on verifying the AI-generated information. The team understood that while the AI was fast, its output could not be trusted without human confirmation.
Scale of the Data
The translation project involved nearly 200 pages of text, totaling more than 150,000 words across the Cyrillic original and its English translation. According to Chiueh, a manual translation of this volume would have likely taken weeks.
The Critical Role of Human Verification
Many journalists, including investigative editor Tom Scheck, were initially skeptical of using AI due to concerns about accuracy and public trust. However, the need for speed in this situation made it a valuable tool, provided there were strict verification protocols.
Reporter Victor Stefanescu contacted two language experts, Giulia Dossi and Anna Pearce, to serve as human verifiers. The team developed a workflow: the AI would provide a quick, large-scale translation, and reporters would then send specific, newsworthy passages to the human experts for confirmation before publication.
Walker Orenstein, the reporter tasked with writing about the shooter's videos, shared this initial skepticism. "My first blush as a reporter was that I don’t trust anything that the AI spits out," he said. "But as I continued to read it, I was like OK, I can see how this is helping immensely."
Catching a Major AI Error
The verification process quickly proved its worth. In one instance, the AI translated a sentence from the journal as, "I have never had a dad or a close friend or family." This suggested a life of isolation and could have become a central point in the shooter's profile.
However, the human translators corrected the passage. The actual text read, "I have never had a death of a close friend or family."
This correction completely changed the meaning of the sentence. The journal, in fact, contained passages where the shooter expressed love and gratitude for her parents. Publishing the AI's initial, incorrect translation would have fundamentally misrepresented the shooter's personal history.
"It was really important to have that human translator as an expert working with us," Orenstein noted, emphasizing that the AI struggled with handwriting and context.
From Translation to Comprehensive Reporting
With a reliable system for translation and verification in place, the Star Tribune team continued its investigation. The translated journal entries provided numerous leads, including the names of the shooter's friends and family, details about visits to gun ranges, and descriptions of her plans.
This digital investigation ran parallel to traditional reporting. More than a dozen journalists conducted interviews with over 50 people, reviewed court records, and analyzed social media profiles. Investigative reporter Jeff Meitrodt focused on building a biography of the shooter, while Orenstein integrated the verified journal passages into a comprehensive profile.
The result was a detailed story published just three days after the shooting. It combined factual, on-the-ground reporting with verified insights from the shooter's own writings, providing the public with a much deeper understanding of the person behind the attack.
The experience demonstrated how newsrooms can ethically and effectively use AI. By treating it as a powerful assistant rather than a source of truth, the Star Tribune was able to navigate a complex investigation with both speed and accuracy.