Meta has struck a multiyear deal to expand its data centers with millions of Nvidia’s Grace and Vera CPUs and Blackwell and Rubin GPUs. While Meta has long used Nvidia’s hardware for its AI products, this deal “represents the first large-scale Nvidia Grace-only deployment,” which Nvidia says will deliver “significant performance-per-watt improvements in [Meta’s] data centers.” The deal also calls for Nvidia’s next-generation Vera CPUs to arrive in Meta’s data centers in 2027.
Meta is also working on its own in-house chips for running AI models, but according to the Financial Times, it has run into “technical challenges and rollout delays” with its chip strategy. Nvidia, meanwhile, is contending with concerns about depreciation and chip-backed loans used to finance AI buildouts, as well as competitive pressure. CNBC notes that Nvidia’s stock dropped four percent after a November report that Meta was considering Google’s tensor processing units (TPUs) for AI, and late last year, AMD announced chip supply deals with both OpenAI and Oracle.
Nvidia and Meta did not disclose the value of the deal, but this year’s AI spending from Meta, Microsoft, Google, and Amazon is estimated to exceed the cost of the entire Apollo space program.