As artificial intelligence becomes more accessible, state governments are taking action to regulate its use in political advertising. Nevada has joined a growing number of states by implementing a law that requires campaigns to disclose AI-generated content, a move aimed at preventing voter deception through technologies like deepfakes.
This legislative trend reflects a broader concern about the potential for AI to spread misinformation. However, analysis suggests the primary threat may not come from official political campaigns, but from independent actors seeking to disrupt elections. At the same time, AI offers new tools for candidates, particularly those with limited funding, to create professional-quality campaign materials.
Key Takeaways
- Nevada is one of 25 states that now require disclosure of AI-generated content in political advertisements.
- Two other states have gone further, banning deceptive political "deepfake" content outright.
- A major incident in New Hampshire involved AI-generated robocalls impersonating President Joe Biden, leading to a $6 million fine for the consultant responsible.
- Experts suggest that established campaigns are unlikely to risk using deceptive deepfakes due to severe legal and reputational consequences.
- AI is also being used by campaigns as a cost-effective tool for creating standard political advertisements and enhancing visual content.
A Wave of New State Regulations
Concerns over AI-generated content influencing voters have prompted swift legislative action across the United States. Nevada's law, known as AB73, is part of a national movement to bring transparency to the use of AI in political messaging. The law mandates that any advertisement containing content created with artificial intelligence must clearly disclose that fact.
This places Nevada among a group of 25 states with similar disclosure requirements. Lawmakers are primarily focused on the threat of "deepfakes," which are highly realistic but fabricated videos or audio recordings. The fear is that such content could be used to create false narratives or depict candidates saying or doing things they never did, causing significant confusion among the electorate.
A smaller number of states have taken an even stricter stance. According to reports, two states have moved beyond disclosure and have instituted outright bans on the use of deceptive deepfake content in political materials.
The New Hampshire Robocall Incident
The potential for AI misuse was demonstrated in January 2024 during the New Hampshire Democratic primary. Voters received robocalls featuring a voice that sounded identical to that of President Joe Biden, urging them not to participate in the primary election. The voice was not real; it was an AI-generated clone.
"The consultant was fined $6 million by the Federal Communications Commission and indicted on criminal charges for the stunt."
The incident was traced back to a Democratic political consultant. While the consultant faced severe penalties, including a multi-million dollar fine from the FCC and criminal charges, the event highlighted the ease with which this technology can be deployed to spread misinformation.
Defining 'Deepfake' Technology
The term "deepfake" combines "deep learning" and "fake." It refers to synthetic media created using artificial intelligence techniques. These AI models can be trained on existing videos and audio of a person to generate new, fabricated content where the individual appears to say or do things that never happened. The technology has become increasingly sophisticated and accessible.
Distinguishing Threats: Campaigns vs. Bad Actors
While regulators are focused on rules for political campaigns, some analysts argue that the most significant threat of AI-driven misinformation comes from a different source. They believe that established political campaigns are unlikely to risk using deceptive AI due to the high stakes involved.
For a serious candidate, being caught using a malicious deepfake could result in campaign-ending reputational damage, civil lawsuits, and even criminal prosecution. The potential costs are seen as a powerful deterrent, especially in local and state-level elections where public trust is crucial.
Instead, the more probable source of abuse is independent bad actors. These individuals or groups often operate outside the established political system and may already be involved in spreading conspiracy theories or disinformation online. Their goal is often to sow chaos and distrust in institutions, and they are less concerned with legal or ethical boundaries.
Public Perception and Trust
According to various polls, public trust in both media and government institutions has been declining for years. Malicious actors who use AI to create deepfakes can exploit this existing distrust to make their fabricated content seem more believable to certain segments of the population.
These actors can leverage AI to create convincing fake evidence to support false narratives. This type of threat is more difficult to regulate through campaign finance laws, as these individuals are not officially affiliated with any candidate or political party.
AI as a Tool for Campaign Equality
Despite the risks, artificial intelligence is not viewed solely as a threat in the political arena. Many see it as a tool that can democratize the creation of campaign materials, leveling the playing field between well-funded candidates and their less affluent rivals.
High-quality video production, graphic design, and targeted voter outreach have traditionally been very expensive, giving an advantage to candidates with access to large donors and special interest funding. AI tools can significantly lower these costs.
- Content Creation: AI can help generate graphics, write email drafts, and even produce simple video ads quickly and affordably.
- Voter Targeting: AI algorithms can analyze data to help campaigns identify and reach specific voter demographics more efficiently.
- Communications Scaling: For candidates with small teams, AI can automate tasks and allow them to scale their communication efforts without a large budget.
Responsible AI Use in Nevada
So far, the use of AI by official campaigns in states like Nevada has been more creative than deceptive. Candidates have used the technology to produce imaginative and visually engaging ads rather than to manufacture hoaxes.
For example, a candidate for Reno City Council used AI to create a humorous ad featuring spaceships, and a congressional candidate used it to portray political rivals as characters in a classic mobster film. In these cases, AI was used as a supplemental tool for creative expression, not for fabricating reality.
This suggests that for official campaigns, the primary application of AI will be to enhance existing forms of political messaging. It allows them to produce the same types of ads seen on television and in mailers, but with a higher production quality than their budget would normally allow. This evolution is seen by some as a positive development for a campaign environment often dominated by money.