Meta’s Llama 4 Model Will Require 10x More Computing Power, Says Mark Zuckerberg

Meta, the company behind Llama, one of the largest families of open-source language models, recognizes that training future models will demand far more computing power. Speaking on Meta’s second-quarter earnings call, Mark Zuckerberg stressed the importance of building training capacity now rather than falling behind competitors, and said that training Llama 4 will require nearly 10 times more compute than Llama 3.

Zuckerberg acknowledged that future trends are hard to predict but said he would rather build capacity ahead of time than too late, pointing to the long lead times involved in spinning up new inference projects and the need to stay ahead in a rapidly evolving AI landscape.

In April, Meta released Llama 3 in 8-billion- and 70-billion-parameter versions, among the most significant open-source models available at the time. Last week, the company unveiled an upgraded model, Llama 3.1 405B, with 405 billion parameters, further underscoring Meta’s commitment to pushing the boundaries of open language models.
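For a rough sense of what compute at this scale means, a widely used back-of-the-envelope estimate (not from the article) approximates dense-transformer training cost as 6 × parameters × training tokens. The sketch below applies that rule of thumb to the 405-billion-parameter model, assuming Meta’s reported figure of roughly 15 trillion training tokens:

```python
def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute via the common 6 * N * D rule of thumb,
    where N is the parameter count and D is the number of training tokens."""
    return 6 * params * tokens

# Llama 3.1 405B: 405 billion parameters, roughly 15 trillion training tokens
# (assumption: the simple 6*N*D approximation applies cleanly).
flops_405b = training_flops(405e9, 15e12)
print(f"{flops_405b:.2e}")  # on the order of 3.6e25 FLOPs
```

Under these assumptions, a model needing 10 times that compute would sit in the 10^26-FLOP range, which makes the scale of the data-center investments discussed below easier to picture.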

Meta’s CFO, Susan Li, revealed that the company is considering different data center projects and investing in building capacity for training future AI models. This strategic investment is expected to drive increased capital expenditures in 2025.

Training large language models is expensive. Meta’s capital expenditures rose nearly 33% year over year to $8.5 billion in Q2 2024, driven largely by investments in servers, data centers, and network infrastructure. For comparison, The Information reported that OpenAI, another major player in the field, spends $3 billion on model training and an additional $4 billion renting servers from Microsoft at a discounted rate.

Susan Li further explained Meta’s approach to infrastructure development, stating that as they scale generative AI training capacity, they aim to maintain flexibility in how they use it. This flexibility allows them to direct training capacity to different areas, such as gen AI inference or their core ranking and recommendation work, depending on where it adds the most value.

Meta also discussed usage of its consumer-facing Meta AI, noting that India is its largest market for chatbot interactions, but said it does not expect gen AI products to contribute meaningfully to revenue at this time. Even so, the company’s investment in capacity for training future models reflects a long-term commitment to staying at the forefront of AI technology.