OpenAI announced it has disrupted more than 40 malicious networks attempting to misuse its artificial intelligence models since early 2024. In its latest quarterly threat report published in October 2025, the company detailed its ongoing efforts to combat covert influence operations, scams, and cybercrime originating from a range of global actors.
The report emphasizes that while malicious groups are using AI to accelerate their activities, the technology has not yet granted them fundamentally new and dangerous capabilities. Instead, AI is being integrated into existing strategies to increase the speed and scale of their operations.
Key Takeaways
- OpenAI has taken down over 40 malicious networks since it began public threat reporting in February 2024.
- The company's October 2025 report focuses on abuses during the third quarter of the year.
- Threat actors are primarily using AI to enhance existing tactics like scams and influence campaigns, not to create novel threats.
- OpenAI's response includes banning accounts and sharing threat intelligence with industry and security partners.
A Proactive Stance on AI Misuse
OpenAI has framed its public threat reporting as a core part of its mission to ensure artificial general intelligence (AGI) is developed and deployed safely. The initiative, which started in February 2024, aims to provide transparency about how malicious actors attempt to exploit AI tools and the countermeasures being taken.
The company stated its goal is to build a democratic AI framework grounded in rules that protect users from tangible harms. This involves a continuous process of monitoring, detection, and disruption of activities that violate its usage policies. By publicly sharing its findings, OpenAI hopes to raise broader awareness and foster a collaborative defense ecosystem.
Background on Threat Reporting
Major technology companies have increasingly adopted public threat reporting to inform users, researchers, and policymakers about coordinated inauthentic behavior and other security risks. This practice helps create a shared understanding of evolving threats and encourages cross-platform collaboration to address them.
The latest report is a continuation of this effort, providing specific case studies from the third quarter of 2025. The research was compiled by a team of security and intelligence experts, including Ben Nimmo, Kimo Bumanglag, and Michael Flossman, among others.
Key Findings from the October 2025 Report
A central conclusion from the October 2025 update is that malicious actors are currently using AI as an efficiency tool rather than a weapon for creating entirely new types of attacks. This observation is consistent with findings from previous reports and analysis from the broader cybersecurity community.
Threat actors are leveraging AI models to generate content, write code, and craft messages more quickly. However, the underlying strategies remain familiar. According to the report, these groups are essentially "bolting AI onto old playbooks" to operate with greater speed, not gaining novel offensive capabilities from the models themselves.
AI as an Accelerator
The report highlights that the primary advantage AI offers malicious actors is speed and scale. For example, a single operator can generate text for hundreds of social media posts or phishing emails in the time it previously took to write a few, significantly increasing a campaign's potential reach.
This distinction is crucial for understanding the current threat landscape. It suggests that defensive strategies should focus on detecting and mitigating known malicious behaviors that are now occurring at a higher frequency, rather than searching for entirely new and exotic AI-powered attacks.
Types of Disrupted Activities
OpenAI's enforcement actions targeted a variety of policy violations. The report categorizes the disrupted activities into several key areas:
- Covert Influence Operations: These campaigns aim to manipulate public opinion or political discourse in a deceptive manner. Actors used AI to generate articles, social media comments, and other content for their networks.
- Malicious Cyber Activity: This includes using AI tools to assist in tasks related to hacking, such as scripting, malware research, and drafting phishing emails.
- Scams and Fraud: Threat actors utilized AI to create deceptive content for financial scams, attempting to defraud individuals and organizations.
- Authoritarian State Use: The report notes actions taken against networks linked to authoritarian regimes that sought to use AI for social control or for coercion directed at other states.
By identifying and disrupting these networks, OpenAI aims to prevent its tools from becoming enablers of such harmful activities. The company's policies explicitly prohibit the use of its models for illegal activities, hate speech, and influence operations.
Enforcement and Collaborative Defense
OpenAI's strategy for combating misuse involves a multi-layered approach that combines internal enforcement with external collaboration. When a violation of its policies is detected, the primary action is to disable or ban the associated accounts, cutting off the actors' access to the platform.
"When activity violates our policies, we ban accounts and, where appropriate, share insights with partners."
This internal action is often just the first step. The company emphasizes the importance of sharing information with a wider network of stakeholders. This includes other technology platforms, cybersecurity firms, researchers, and law enforcement agencies. Such collaboration allows for a more robust, industry-wide response, as malicious actors often operate across multiple services and platforms.
This collaborative model is essential because a threat actor banned from one service will likely attempt to move to another. By sharing threat intelligence, such as indicators of compromise and the actors' tactics, techniques, and procedures (TTPs), the entire digital ecosystem can become more resilient against these persistent threats.
The Importance of Public Disclosure
The decision to publicly report on these disruptions serves several strategic purposes. First, it creates a deterrent effect by demonstrating that malicious use is being actively monitored and acted upon. Second, it educates the public and potential targets about the types of AI-assisted threats they might encounter.
Finally, transparency contributes to the development of better safety standards across the AI industry. By sharing case studies and methodologies, OpenAI provides valuable data for other AI developers and safety researchers working on similar challenges. This open approach is seen as critical for building public trust and ensuring that AI technology evolves in a beneficial direction for society.
As AI capabilities continue to advance, the nature of these threats will likely evolve. Continuous monitoring, rapid response, and deep collaboration will remain essential components in the ongoing effort to prevent the malicious use of artificial intelligence.