Meta is postponing the launch of its next-generation foundational artificial intelligence model after internal tests showed it trailing rival systems from Google and OpenAI. The delay represents a significant challenge in CEO Mark Zuckerberg's multi-billion dollar push to establish the company as a leader in the AI space.
The model, internally codenamed Avocado, was originally scheduled for release this month but is now expected no earlier than May. Sources familiar with the matter indicate that while the new system surpasses Meta's previous efforts, it has not yet reached the performance benchmarks set by the industry's top models.
Key Takeaways
- Meta has delayed the release of its new AI model, codenamed Avocado, from March to at least May 2026.
- Internal testing revealed the model's performance on reasoning and coding tasks trails behind top models from Google and OpenAI.
- The company has discussed temporarily licensing Google's Gemini model as a potential stopgap measure.
- The setback comes despite Meta investing billions into AI development, including a projected $135 billion in spending this year.
A High-Stakes Race for AI Supremacy
The delay highlights the intense pressure within the technology sector to develop cutting-edge AI. Mark Zuckerberg has publicly committed to making Meta a dominant force in artificial intelligence, a strategy backed by massive financial investment. The company has allocated hundreds of billions of dollars for data center construction and projects spending of up to $135 billion this year alone.
This figure is nearly double the $72 billion spent in the previous year, underscoring the scale of Meta's ambition. The goal, as stated by Zuckerberg, is to "push the frontier" of AI capabilities. However, the current performance of the Avocado model places that immediate goal in jeopardy.
While Avocado showed improvements over Meta's last model, Llama 4, and even outperformed Google's Gemini 2.5 from March, it reportedly struggles to match the capabilities of Google's more recent Gemini 3.0, released in November.
The Competitive Landscape
The field of foundational AI models is currently led by a small group of companies. OpenAI's GPT series, Google's Gemini family, and Anthropic's Claude models are widely considered the industry benchmarks. These complex systems form the technological backbone for a wide range of applications, from advanced chatbots to sophisticated coding assistants and content generators. Falling behind in this core technology can impact a company's ability to innovate and attract top engineering talent.
Internal Reorganization and Strategic Debates
To accelerate its AI progress, Meta made a significant move last year by investing $14.3 billion in the startup Scale AI and appointing its CEO, Alexandr Wang, as Meta's chief AI officer. Wang was tasked with leading a new, elite research group named TBD Lab, which is responsible for developing both Avocado and an image generation model called Mango.
The lab, composed of roughly 100 researchers, completed the initial "pre-training" phase for Avocado at the end of last year. The subsequent "post-training" phase began in January, with an initial target release date of mid-March.
The development process has not been without internal friction. Discussions have occurred regarding the fundamental strategy for the new model. A key debate centers on whether to make it "open source," allowing outside developers to build upon the technology, or keep it "closed" and proprietary. While Meta has historically favored an open-source approach, leadership reportedly leaned toward a closed model for this new generation.
By the Numbers: Meta's AI Investment
- $135 Billion: Projected spending for the current year.
- $600 Billion: Committed to building data centers to power AI.
- $14.3 Billion: Investment in Scale AI, which brought its CEO to Meta.
Navigating Setbacks and Future Plans
The performance gap has forced Meta's leadership to consider alternative strategies. One option that has been discussed internally is the possibility of licensing a rival's technology, specifically Google's Gemini, to power Meta's AI products in the short term. No final decision on this has been made.
In response to the challenges, Meta recently announced the creation of a new AI engineering team under Chief Technology Officer Andrew Bosworth. This team is designed to collaborate more closely with Wang's TBD Lab, suggesting a move to better integrate the research efforts with the company's broader product goals, particularly in its lucrative advertising business.
"As we’ve said publicly, our next model will be good but, more importantly, show the rapid trajectory we’re on, and then we’ll steadily push the frontier over the course of the year as we continue to release new models," a Meta spokesman, Dave Arnold, stated on Thursday.
Zuckerberg himself has sought to manage expectations in recent months. In a January call with investors, he noted, "I expect our first models will be good, but more importantly will show the rapid trajectory we’re on." Despite the current delay, the company is already planning its next iteration, which is reportedly codenamed Watermelon, signaling a long-term commitment to the AI race regardless of immediate setbacks.