Qualcomm Challenges Nvidia and AMD in Data Center AI Chip Race
27 Oct
Summary
- Qualcomm launching new AI200 and AI250 chips for data centers
- Chips designed for AI inference, not training
- Touting low power consumption and total cost of ownership

In a major strategic shift, Qualcomm is set to challenge industry leaders Nvidia and AMD in the lucrative data center market. The company is launching its new AI200 and AI250 chips, scheduled for release in 2026 and 2027 respectively, as part of an ambitious plan to stake a claim in the multibillion-dollar industry.
The AI200 will be offered both as an individual AI accelerator and as a full server rack that includes a Qualcomm CPU. The AI250, slated for 2027, is the next-generation version, offering 10 times the memory bandwidth of the AI200. Qualcomm says it will maintain an annual cadence of new chip and server releases, with a third offering planned for 2028.
Qualcomm's new data center chips leverage the company's custom Hexagon neural processing unit (NPU) technology, which has been used in its Windows PC chips. The company is now scaling up this expertise for the data center market, touting low power consumption and lower total cost of ownership as key benefits.
Importantly, Qualcomm's new chips are designed specifically for AI inference, the process of running existing AI models, rather than training new ones. This positioning sets them apart from the general-purpose offerings of Nvidia and AMD, which are used for both training and inference.
Customers will have the flexibility to purchase individual chips, portions of Qualcomm's server offerings, or the entire setup. Interestingly, Qualcomm sees potential for partnerships with Nvidia and AMD, even as it competes with them in the data center space.