A new political ad in the Texas Senate race features a hyper-realistic but entirely fake version of the Democratic candidate, created using artificial intelligence. The National Republican Senatorial Committee (NRSC) released the 85-second video, escalating the use of AI technology in political campaigns and intensifying the debate over its ethical and legal implications.
The ad, which targets Democratic nominee James Talarico, shows an AI-generated likeness of the candidate appearing to read and comment on his past social media posts. While the posts are real, the video also includes fabricated commentary, raising new questions about misinformation in elections.
Key Takeaways
- The NRSC released an 85-second deepfake video of Democratic candidate James Talarico.
- The ad uses real tweets from Talarico but adds fabricated, self-praising commentary.
- A small, faint “AI GENERATED” disclosure is present, which experts argue is insufficient for viewers.
- The video highlights a growing trend of AI use in the 2026 midterm elections and fuels calls for federal regulation.
A Hyper-Realistic Political Attack
The video, posted to the social media platform X on March 11, 2026, shows a digital version of James Talarico speaking directly to the camera. The AI-generated figure reads excerpts from tweets the real Talarico posted in 2013 and 2021 concerning transgender issues, race, and religion.
However, the ad goes beyond simply animating past statements. The fake Talarico adds new commentary, making statements like “oh, this one is so touching” and “oh, I love this one too.” There is no evidence the actual candidate ever made these remarks.
Expert Analysis
Hany Farid, a professor at the University of California, Berkeley specializing in digital forensics, described the video as “hyper-realistic.” He noted, “The face and voice are very good. There is a slight misalignment between audio and video, but otherwise this is hyper-realistic and I don’t think that most people would immediately know it is fake.”
The NRSC defended its strategy. A source familiar with the committee's thinking called AI a “consistently effective” tool for visualizing a candidate's real words for voters. NRSC communications director Joanna Rodriguez stated that Democrats are “panicking after hearing James Talarico’s own words.”
The Talarico campaign pushed back strongly. Campaign spokesperson JT Ennis said Republican primary candidates are “scared of James Talarico,” adding, “While they spend their time making deepfake AI videos to mislead Texans, we are uniting the people of Texas to win in November.”
The Debate Over Disclosure and Deception
The ad includes a text disclosure, but its effectiveness is a central point of contention. The words “AI GENERATED” appear in a small, faint font in a bottom corner of the screen for most of the video's duration.
Professor Farid argued that this type of disclosure is inadequate. “I don’t think that faint, small font in the bottom righthand corner comes close to appropriate disclosure because the average person doom scrolling on X/YouTube is simply not going to notice,” he explained. He also expressed concern that campaigns are opening a “Pandora’s box” of deceptive practices.
The use of AI in this manner has prompted bipartisan calls for regulation. Democratic Senator Andy Kim of New Jersey responded to the ad by demanding national action to protect all citizens from being targeted by deepfakes.
“These deepfakes are dangerous and wrong. We need protections not just for politics, but for all Americans that could be targeted.”
Texas Law on Political Deepfakes
Texas has one of the nation's strictest laws regarding political deepfakes, passed in 2019. It makes creating and distributing a deepfake video intended to deceive voters a criminal misdemeanor. However, the law only applies within 30 days of an election. The NRSC ad was released months before the November general election, placing it outside this legal window.
A Growing Trend in the 2026 Midterms
The Talarico ad is not an isolated incident. The 2026 midterm election cycle has seen a significant increase in the use of AI-generated content by both political parties, as the technology becomes more accessible and convincing.
Other Examples of AI in Campaigns
- In the Texas Republican Senate primary, an attack ad from Ken Paxton’s campaign used a fake video of Senator John Cornyn dancing with a Democratic congresswoman.
- John Cornyn's campaign used AI-generated clips to portray a rival as a “show dog.”
- In 2023, Florida Governor Ron DeSantis’s presidential campaign posted fake images of Donald Trump hugging Dr. Anthony Fauci.
- A consultant for Rep. Dean Phillips's presidential campaign hired someone to create an AI version of President Joe Biden's voice to discourage voting in the New Hampshire primary.
Sarah Kreps, director of the Tech Policy Institute at Cornell University, suggested that campaigns are starting to use AI more openly, with disclosures, rather than covertly. She believes this may be a reaction to public backlash against being deceived.
“What we’re likely seeing is a kind of competitive boundary-pushing: once one campaign demonstrates a tactic, others adopt it rather than risk a perceived disadvantage,” Kreps said. She predicts that synthetic media is “likely to become a routine campaign tool” for all parties.
Even when an AI-generated ad sparks controversy, the campaign behind it can benefit from the increased attention. The debate itself often amplifies the ad's message, ensuring it reaches a wider audience than it otherwise would have, a reality that suggests this technology is here to stay in the political arena.