Meta chief executive Mark Zuckerberg said on October 30 that the next major version of the company's open-source artificial intelligence (AI) model, Llama 4, will likely be ready early next year, as the frenzied AI arms race among tech giants heats up.
"The Llama 3 models have been something of an inflection point in the industry, but I'm even more excited about Llama 4, which is now well into its development," Zuckerberg said at the earnings conference call.
The Meta chief said that smaller Llama 4 models will be released first and will be a major upgrade over Llama 3 across several areas, including new modalities and capabilities, stronger reasoning, and much faster performance.
"We're training the Llama 4 models on a cluster that is bigger than 100,000 H100s or bigger than anything that I've seen reported for what others are doing," he said. The H100 is a graphics processing unit (GPU) chip manufactured by Nvidia that is popular to train large AI models.
This launch is expected to further escalate the intense competition among rivals Google, OpenAI, Microsoft and Elon Musk's xAI, who are racing to outdo each other in debuting their next-generation frontier models.
On October 29, Google chief executive Sundar Pichai said the search giant is working on the third generation of Gemini models and it is "progressing well". Google and OpenAI are eyeing a December debut for their flagship AI models, according to a report by The Verge. Tech billionaire Elon Musk also said in July that xAI plans to launch Grok 3 in December.
This intensifying competition also means increased AI spending. Meta said it anticipates a "significant acceleration" in infrastructure expenses next year due to expanded investments in servers, data centers, and other infrastructure.
The firm has raised its annual spending forecast to $38-40 billion, up from the $37-40 billion it had projected in July.
During the earnings call, Zuckerberg acknowledged that increasing spending to build out the infrastructure "is, maybe, not what investors want to hear" in the near term. "But I just think that the opportunities here are really big. We're going to continue investing significantly in this."
Growing momentum
Zuckerberg also said Meta is seeing strong momentum in the adoption of its Llama AI models. "Llama token usage has grown exponentially this year, and the more widely that Llama gets adopted and becomes the industry standard, the more that the improvements to its quality and efficiency will flow back to all of our products," he said.
He said the company is working with the public sector to drive adoption of Llama across the US government and is also making the models easier for enterprises to use.
Meta recently stated that the Llama models have been downloaded over 400 million times, marking a 10-fold increase since 2023. Developers have also created over 65,000 derivative models to date, said Ragavan Srinivasan, Meta vice-president of product management, at a company event earlier this month.
The social networking giant has also iterated rapidly on the Llama models, adding new capabilities with each release. After launching Llama 3 in April, it introduced Llama 3.1 in July, which added support for seven languages, including Hindi, along with new licensing terms that make it easier for developers to use synthetic data generated by Llama models to create derivative models or train other models.
In September, Meta introduced Llama 3.2, which adds multimodal support, meaning it can understand different formats such as text and images at the same time. Last week, the firm released lightweight quantised versions of the Llama models that can run on smartphones and tablets.
Over the past week, Infosys Chairman Nandan Nilekani and Reliance Industries Chairman Mukesh Ambani have each commended Meta’s efforts to open-source Llama. While speaking at Meta's Build with AI summit in Bengaluru on October 23, Nilekani said the move was a "game changer for us in India and something we need to take full advantage of".
In a conversation with Nvidia chief executive Jensen Huang in Mumbai on October 24, Ambani said he has "great respect for my friend Mark Zuckerberg because, by bringing open source to the world of intelligence, he has given everybody the opportunity to participate in this revolution".
Why open source matters
During the call, Zuckerberg said that open source will be the most cost-effective, customisable, trustworthy, performant, and easiest-to-use option available to developers.
"One of the big costs here is chips. In a lot of the infrastructure, what we're seeing is that as Llama gets adopted more, you're seeing companies like NVIDIA and AMD optimise their chips more to run Llama specifically well, which clearly benefits us," he said.
"It benefits everyone who's using Llama and makes our products better, rather than if we were just on an island, building a model that no one was kind of standardising around in the industry...That's why I think it's good business for us to do this in an open way," Zuckerberg said.
Disclaimer: Moneycontrol is part of the Network18 group. Network18 is controlled by Independent Media Trust, of which Reliance Industries is the sole beneficiary.