TechAI Desk

Meta Rolls Out In-House AI Chips Weeks After Massive Nvidia, AMD Deals

Meta has deployed its first custom AI chips, the MTIA 300, and has completed testing of the MTIA 400, with more models planned for 2027. These in-house accelerators, manufactured by TSMC, target specific AI tasks such as recommendation systems and generative inference, but not the training of large language models. Simultaneously, Meta is securing large-scale GPU supplies from Nvidia and AMD while expanding its U.S. data center footprint. The company cites supply chain diversification and cost optimization as key goals, though it acknowledges potential shortages of high-bandwidth memory. The move aligns with a broader industry trend of tech firms developing proprietary silicon to reduce reliance on external vendors.


Meta has unveiled four custom AI accelerator chips under its MTIA family, with the MTIA 300 already deployed and the MTIA 400 nearing rollout, as the company simultaneously secures millions of GPUs from Nvidia and AMD to bolster its AI data centers.

Meta's Custom AI Chip Lineup

  • MTIA 300: Deployed a few weeks ago for training smaller AI models that power core tasks like content ranking and ads on Facebook and Instagram.
  • MTIA 400: Completed testing and is on the path to deployment; optimized for generative AI inference tasks such as image and video generation from text prompts.
  • MTIA 450 and MTIA 500: Planned for operation in 2027, targeting advanced generative AI inference.
  • These chips are not intended for training large language models (LLMs).

Manufacturing and Supply Chain Strategy

  • All MTIA chips are manufactured by Taiwan Semiconductor Manufacturing Company (TSMC).
  • Meta aims to improve cost per performance and diversify silicon supply to mitigate price volatility and vendor dependency.
  • The company acknowledges concerns about high-bandwidth memory (HBM) supply but says it has secured enough for its current plans; it did not disclose contract terms.
  • Meta emphasizes a diversified supply chain, with hundreds of U.S.-based engineers leading its silicon development.

Industry Context and Competitive Landscape

  • Tech giants like Google (with TPUs since 2015) and Amazon (since 2018) have developed in-house ASICs, but Meta's MTIA chips are solely for internal use, not offered via cloud services.
  • Meta's rapid release cadence, aiming for a new chip every six months, is driven by swift data center expansion and high capital expenditures.
  • The chips are designed as application-specific accelerators, offering cost and efficiency benefits over general-purpose GPUs but with narrower task applicability.

Data Center Expansion and GPU Partnerships

  • Meta is constructing massive data centers, including a 5-gigawatt Hyperion facility in Louisiana, and sites in Ohio and Indiana.
  • Recent deals involve purchasing millions of Nvidia GPUs and up to 6 gigawatts of AMD GPUs over multiple years to complement in-house chips.
  • The company is reportedly exploring space at the Stargate site in Texas after OpenAI and Oracle scaled back expansion plans.

Future Outlook and Operational Details

  • MTIA chips are expected to have a standard useful lifetime of over five years.
  • Of Meta's 30 operational and planned data centers, 26 are located in the U.S., reflecting a domestic focus for its infrastructure buildout.
  • The dual strategy of in-house chip development and external GPU procurement aims to balance innovation, cost, and supply chain resilience amid rising AI demands.