The AI hardware market is undergoing a structural shift, moving beyond an exclusive focus on Nvidia's GPUs. This change is driven by the adoption of 'orchestration', an architecture that distributes complex AI workloads across multiple processing channels. Industry analysts, including Morgan Stanley, suggest that as AI becomes more 'agentic,' demand is shifting toward CPUs and memory chips relative to GPUs. Tech giants like Meta and AMD have reinforced this view, emphasizing that no single chip architecture can handle all workloads efficiently. Consequently, components like CPUs, memory systems, and EDA tools are seeing increased investment and market attention.
The focus of the AI hardware market is shifting from an exclusive reliance on high-end GPUs to a broader ecosystem encompassing CPUs and memory chips, driven by advanced system architectures.
The Rise of 'Orchestration' in AI Workloads
The core driver behind this market pivot is the emergence of a new systems architecture known as "orchestration." This approach distributes complex AI workloads across multiple, varied processing channels rather than concentrating them solely on massive, centralized GPU clusters.
Shift in Demand: Orchestration requires a higher proportion of traditional central processing units (CPUs) relative to GPUs in the overall compute mix.
GPU Role: GPUs remain critical for foundational AI tasks, such as the initial training of large models and direct query response.
System Complexity: The increasing complexity of AI software, particularly as it becomes "agentic" (capable of handling generalized, multi-step instructions), is expanding the required compute stack.
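The division of labor described above can be sketched in a few lines of code. This is a toy illustration only: the channel names and routing table are assumptions made for this example, not any vendor's actual scheduler or API.

```python
# Toy "orchestration" sketch: route the steps of an agentic task to
# different processing channels instead of sending everything to GPUs.
# The routing table below is hypothetical and purely illustrative.
ROUTING = {
    "model_inference": "GPU",    # large-model forward passes stay on GPUs
    "memory_management": "CPU",  # context bookkeeping is CPU work
    "tool_use": "CPU",           # calling external tools is CPU-bound
    "retrieval": "CPU+DRAM",     # lookups lean on memory capacity/bandwidth
}

def orchestrate(steps):
    """Assign each step of a multi-step agentic task to a channel."""
    return [(step, ROUTING.get(step, "CPU")) for step in steps]

# A generalized, multi-step instruction decomposes into mixed steps,
# so the required compute stack broadens beyond GPUs alone.
plan = orchestrate(["retrieval", "model_inference",
                    "tool_use", "model_inference"])
gpu_share = sum(1 for _, ch in plan if ch == "GPU") / len(plan)
```

Even in this trivial sketch, only half the steps land on a GPU; the rest fall to CPUs and memory, which is the demand shift the analysts describe.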
Industry Analysis Points to CPU and Memory Growth
Major financial institutions and tech leaders are signaling that the demand curve is broadening beyond GPUs:
Morgan Stanley Insight: Analysts note that "agentic AI will increase the CPU-to-GPU mix" by incorporating tasks related to memory management and tool usage. They caution this increases overall system complexity, shifting incremental spending toward CPUs, networking, and memory.
Meta's Stance: Meta emphasized the need for CPUs, stating that "No single chip architecture can efficiently serve every workload," and highlighted its use of Graviton CPUs in Amazon's cloud infrastructure.
AMD's View: AMD positioned CPUs as a "strategic pillar of the AI compute stack," essential for enabling efficiency, scalability, and orchestration alongside GPUs.
Expert Commentary on Market Misconceptions
Industry experts are actively correcting the market's perception that AI power is synonymous only with GPUs.
Misunderstanding: Former Deloitte cloud officer David Linthicum stated that equating AI power solely with GPUs is a "misunderstanding." He advised focusing on using CPUs as much as possible when designing systems.
Cybersecurity Parallel: This concept was observed in cybersecurity, where researchers successfully reproduced advanced security findings by orchestrating multiple, less powerful, publicly available models, rather than relying on a single, state-of-the-art system.
Beneficiaries Beyond GPUs
The pivot to orchestration benefits several other segments of the data center supply chain:
Key Components: This includes memory systems (DRAM and NAND), electronic design automation (EDA), and baseboard management controllers.
Beneficiary Stocks: Companies such as Intel, along with memory manufacturers including Micron, Samsung, SK Hynix, and Kioxia, are benefiting from this broader infrastructure spending.