A sprawling Chinese influence operation targeting dissidents abroad was accidentally exposed after a Chinese law enforcement official used ChatGPT to document the covert campaign, a new report from OpenAI reveals. The platform, which was used like a digital diary, provided a detailed account of tactics aimed at intimidating and silencing critics of the Chinese Communist Party.
The operation involved hundreds of operators managing thousands of fake online accounts across multiple social media platforms. The incident highlights the unexpected ways in which artificial intelligence tools are becoming entangled in global information warfare and transnational repression efforts.
Key Takeaways
- A Chinese law enforcement official used ChatGPT to keep a log of a covert influence operation, leading to its discovery by OpenAI.
- The campaign targeted Chinese dissidents in other countries, employing tactics like impersonating U.S. immigration officials and faking a dissident's death.
- The network involved hundreds of Chinese operators and thousands of fraudulent social media accounts.
- This case illustrates how AI tools can be used by state actors to organize and document censorship and repression campaigns.
An Unlikely Digital Diary
Investigators at OpenAI uncovered the operation when they identified a user who was systematically documenting a covert campaign. This user, identified as a Chinese law enforcement official, treated the AI tool as a personal journal to track the network's activities. OpenAI subsequently banned the user and analyzed the logs.
The entries provided a clear window into the methods and goals of the campaign. Ben Nimmo, a principal investigator at OpenAI, described the findings as a look into modern transnational repression. He explained that the efforts were industrialized and designed to attack critics of the Chinese Communist Party from all angles simultaneously.
"This is what Chinese modern transnational repression looks like. It’s not just digital. It’s not just about trolling. It’s industrialized. It’s about trying to hit critics of the CCP with everything, everywhere, all at once."
While ChatGPT was used to document the operation, the content spread by the network was largely generated by other tools. The information was then disseminated through a vast web of fake accounts and websites designed to amplify the messages.
Tactics of Intimidation and Deception
The logs detailed several specific actions taken by the network to harass and discredit Chinese dissidents living outside of China. OpenAI investigators were able to cross-reference the user's descriptions with real-world online events, confirming the campaign's impact.
Impersonation and Forgery
In one documented instance, operators within the network allegedly posed as U.S. immigration officials. They contacted a U.S.-based Chinese dissident and warned that the dissident's public statements had supposedly violated the law, a clear intimidation tactic.
Another effort involved the use of forged documents. The operators attempted to use fake documents from a U.S. county court to convince a social media platform to suspend a dissident's account. This demonstrates a sophisticated approach that blends online activity with fabricated real-world evidence.
Faking a Dissident's Death
One of the most disturbing tactics documented was an attempt to fake the death of a prominent Chinese dissident. The ChatGPT user described creating a phony obituary and manipulating photos of a gravestone. These materials were then posted online to spread false rumors, an effort that was reported by Voice of America in 2023.
Targeting Foreign Leaders
The campaign's scope was not limited to individual dissidents. The user also attempted to use ChatGPT to generate a multi-part plan to denigrate Sanae Takaichi, the incoming Japanese prime minister. The plan involved stoking online anger over U.S. tariffs on Japanese goods to create a negative public perception.
According to OpenAI, ChatGPT refused to fulfill this malicious prompt. However, researchers later observed hashtags attacking Takaichi and complaining about U.S. tariffs appearing on a popular Japanese online forum around the time she took office, suggesting the plan was carried out using other means.
Broader Implications for AI and Geopolitics
This report emerges amid escalating competition between the United States and China for dominance in artificial intelligence. The incident underscores how these powerful technologies can be repurposed for state surveillance and information control.
Michael Horowitz, a former Pentagon official focused on emerging technologies and now a professor at the University of Pennsylvania, commented on the findings. He stated that the report "clearly demonstrates the way that China is actively employing AI tools to enhance information operations."
The US-China AI Competition
The rivalry between the two nations extends from military applications to corporate boardrooms. The development and control of advanced AI are seen as critical for future economic and military superiority. This event shows how the "day-to-day" implementation of surveillance and information control is also part of this competition.
The OpenAI report serves as a stark reminder of the dual-use nature of AI. While these tools offer immense potential for productivity and innovation, they also present new avenues for authoritarian regimes to extend their reach and suppress dissent beyond their borders.
The accidental nature of this discovery also raises questions about what other, more carefully concealed operations may be underway. As AI becomes more integrated into daily workflows, the potential for both intentional misuse and accidental exposure is likely to grow, creating new challenges for technology companies and policymakers alike.