Tech5 views5 min read

AI Security Firm Irregular Secures $80 Million in Funding

AI security startup Irregular has raised $80 million to test advanced AI models for vulnerabilities, working with clients like OpenAI, Anthropic, and the UK government.

James Mitchell
By
James Mitchell

James Mitchell is a technology journalist for Neurozzio, specializing in artificial intelligence, venture capital, and deep tech startups. He covers funding rounds, emerging technologies, and the intersection of AI and cybersecurity.

Author Profile
AI Security Firm Irregular Secures $80 Million in Funding

Irregular, an artificial intelligence security startup, has emerged from a period of quiet development with $80 million in new funding. The company, founded in 2023, specializes in testing the vulnerabilities of advanced AI models for major clients, including OpenAI, Anthropic, and government agencies.

The funding was secured in two separate rounds. The initial round brought in $30 million, led by Sequoia. A subsequent round added approximately $50 million from investors including Sequoia, Redpoint, and several angel investors. Irregular focuses on identifying potential misuse and security flaws in next-generation AI systems before they are widely deployed.

Key Takeaways

  • Total Funding: Irregular has raised a total of $80 million across two recent funding rounds.
  • Prominent Investors: Key backers include Sequoia, Redpoint, Swish Ventures, Assaf Rappaport, and Ofir Ehrlich.
  • Core Business: The company stress-tests advanced AI models to identify security risks and potential for misuse.
  • High-Profile Clients: Irregular is already working with leading AI labs like OpenAI and Anthropic, as well as the UK government.
  • Founders: The startup was established by CEO Dan Lahav and CTO Omer Nevo, who have backgrounds at Google and IBM.

Details of the $80 Million Investment

Irregular, which previously operated under the name Pattern Labs, has successfully closed two significant funding rounds in quick succession. The first investment was a $30 million round led by the venture capital firm Sequoia.

Shortly after, a second round of about $50 million was completed. Sequoia participated again, joined by Redpoint, Omri Caspi’s Swish Ventures, and a group of angel investors. This group was led by prominent figures in the tech industry, including Wiz co-founder Assaf Rappaport and Eon co-founder Ofir Ehrlich.

Despite being founded only in 2023, the company reports it is already generating millions of dollars in revenue. It currently employs a team of 25 people, with the majority of its staff based in Israel.

Company Snapshot

  • Founded: 2023
  • Total Employees: 25
  • Headquarters: Operations primarily based in Israel
  • Reported Revenue: Several million dollars annually

Proactively Securing Next-Generation AI

Irregular’s primary mission is to evaluate how advanced AI models perform under pressure from real-world threats. The company runs controlled simulations to test the resilience and potential vulnerabilities of these systems.

These tests are designed to anticipate how AI could be misused. Scenarios include attempts to bypass antivirus software, conduct autonomous cyberattacks, and map out system environments for malicious purposes. The goal is to provide AI developers and operators with a secure method to discover and fix flaws before a model is released to the public.

"There is a new market that is opening up, an emerging AI frontier where a proactive approach is essential," said Dan Lahav, CEO of Irregular. "We aim to understand these systems from the inside, anticipate potential damage, and work directly with the systems themselves."

The company works closely with its clients, which include some of the most prominent AI labs in the world. According to the founders, this collaboration is central to their work. "We have revenues coming from the largest labs in the world like OpenAI and Anthropic. We have research work with them, and we are at the heart of the activity with them," they stated.

The Growing Need for AI Security

As artificial intelligence models become more powerful and integrated into critical infrastructure, the potential risks associated with them also increase. A single vulnerability or malfunction could have widespread consequences. Companies like Irregular are part of a growing industry focused on ensuring that AI development proceeds safely and responsibly, addressing security concerns before they can be exploited.

Experienced Founders with Diverse Backgrounds

The leadership team at Irregular brings extensive experience from major technology companies and a unique shared history in competitive debate.

Dan Lahav, CEO

CEO Dan Lahav previously worked at LabPixies, a startup that was acquired by Google. He later served as an AI researcher at IBM, where he earned the company's Outstanding Technical Achievement Award. Lahav has also published research in scientific journals, including a feature in Nature.

Omer Nevo, CTO

CTO Omer Nevo is a serial entrepreneur who was a development manager at Google Research. While there, he led several AI projects, including the development of models for wildfire detection. Both founders are highly accomplished in competitive debate, with Nevo being a world debate champion and Lahav holding the highest personal ranking in the history of the world championship.

Technology and Future Outlook

Irregular utilizes advanced techniques to conduct its security evaluations safely. The company employs confidential inference and hardware-based verification, which allows AI labs to test their models for cyber risks without exposing sensitive information.

This capability enables Irregular to assess the safety of AI systems even before they are publicly launched or widely implemented. By providing a secure environment for rigorous testing, the company helps organizations build defenses against potential misuse.

With the rapid adoption of AI across all sectors, the demand for specialized security and resilience testing is expected to grow significantly. Irregular's early partnerships and substantial funding position it to play a key role in ensuring the safe and secure deployment of future AI technologies.