A new report from The Conference Board reveals a dramatic increase in the number of S&P 500 companies identifying artificial intelligence as a material risk. Over 70% of these major public corporations now include AI in their risk disclosures, a significant jump from just 12% in 2023, signaling a major shift in how corporate leadership views the technology's impact.
This surge reflects the rapid integration of AI from an experimental technology into core business operations, prompting boards and executives to formally acknowledge potential downsides, including reputational damage, cybersecurity threats, and regulatory challenges.
Key Takeaways
- Over 70% of S&P 500 companies now list AI as a material risk in public filings, up from 12% in 2023.
- Reputational risk is the most frequently cited concern, mentioned by 38% of companies.
- Cybersecurity vulnerabilities associated with AI are disclosed by 20% of firms.
- A separate PwC survey shows only 35% of corporate boards have formally integrated AI into their oversight, indicating a governance gap.
A Rapid Shift in Corporate Risk Perception
The findings, detailed in a report released on Friday by The Conference Board, highlight how quickly artificial intelligence has moved from a peripheral topic to a central concern for the world's largest businesses. The climb from 12% in 2023 to over 70% today illustrates the speed of AI adoption across major enterprises.
This change indicates that AI is no longer confined to research and development labs. Instead, it has become deeply integrated into fundamental business processes, forcing companies to re-evaluate their risk landscapes.
"This is a powerful reflection of how quickly AI has developed from a niche topic to widely adopted and embedded in the organization," Andrew Jones, a principal researcher at the Conference Board Governance & Sustainability Center, stated in an email to Cybersecurity Dive.
According to Jones, AI is now a critical component in areas such as product design, logistics management, credit risk modeling, and customer service interfaces. As dependency on these systems grows, so does the potential for significant disruption if they fail or are compromised.
The Primary Concerns Voiced by Top Companies
The report provides a detailed breakdown of the specific AI-related risks that companies are disclosing. The concerns are not limited to technical failures but extend to business reputation, legal compliance, and security infrastructure.
Reputational Damage Leads the List
The most common concern cited is reputational risk, mentioned by 38% of companies. This reflects corporate anxiety over maintaining public trust: a failure in an AI-powered system can have immediate and widespread consequences for a brand's image.
Potential issues include breakdowns in customer-facing services, the mishandling of sensitive consumer data, or the deployment of a tool that delivers biased or inaccurate results. In a competitive market, loss of customer trust can be difficult to regain.
By the Numbers: Top AI Risks
- Reputational Risk: 38%
- Cybersecurity Risk: 20%
- Legal & Regulatory Risk: frequently cited as a major concern
Cybersecurity and Regulatory Hurdles
Cybersecurity was the second most-cited risk, flagged by 20% of firms as a key concern. Integrating AI technologies expands a company's digital footprint, giving malicious actors a larger "attack surface" to target.
Companies are also worried about vulnerabilities introduced through third-party AI applications and services, which may not meet their internal security standards. These external tools can create new entry points for data breaches and other cyberattacks.
Alongside security, legal and regulatory risks are a significant issue. Governments at both the state and federal levels are moving quickly to establish rules and guardrails for AI. This evolving legal landscape creates uncertainty for businesses trying to innovate while remaining compliant.
Governance Structures Lag Behind AI Adoption
While companies are increasingly aware of the risks, their internal governance structures are still catching up. Data from a separate study, the PwC "2025 Annual Corporate Director’s Survey," reveals a potential gap between risk awareness and board-level oversight.
The PwC survey found that only 35% of corporate boards have formally integrated AI into their oversight responsibilities, suggesting that many companies are still building the oversight frameworks needed to manage the technology responsibly.
The Governance Challenge
The gap between rapid AI deployment and formal oversight is a critical challenge. Without clear governance, companies risk inconsistent application of AI ethics, inadequate risk mitigation, and a lack of accountability when systems fail. Establishing clear lines of responsibility at the board level is seen as a crucial next step for mature AI adoption.
Corporate directors are grappling with how to balance the strategic advantages of AI against their duty to manage its risks. The path forward involves continuous education and the creation of clear internal policies.
"Directors recognize that AI brings both strategic opportunity and fiduciary risk, and many are starting to consider how to strengthen governance through regular education, clear oversight structures, and responsible-use frameworks," said Ray Garcia, leader of PwC’s Governance Insights Center.
As AI becomes more integral to business success, the development of robust governance will be essential for long-term stability and sustainable innovation. The sharp rise in risk disclosures is the first step in this broader corporate evolution.