On October 27, the BBC reported that Qualcomm Inc. had announced the launch of its rack-scale artificial intelligence accelerators, the Qualcomm AI200 and AI250 chips and rack products, in a direct challenge to the current industry leader, NVIDIA.
Following the announcement, Qualcomm's stock surged at Monday's open, with intraday gains at one point exceeding 20%. Although the gains narrowed by the close, the stock still ended the day up about 11%.
Both products are built on Qualcomm's in-house Neural Processing Unit (NPU) technology and emphasize high memory capacity and cost-effectiveness. The AI200, scheduled for commercial availability in 2026, supports up to 768GB of LPDDR memory per card; by comparison, NVIDIA's latest GB300 carries only 288GB of HBM3e memory per GPU.
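For a rough sense of the capacity gap being highlighted, here is a minimal back-of-envelope sketch in Python using only the per-device figures quoted above (it deliberately ignores the large bandwidth and cost differences between LPDDR and HBM3e, which are not directly comparable memory technologies):

```python
# Back-of-envelope comparison of the per-device memory figures cited in the article.
# This ignores that LPDDR and HBM3e differ greatly in bandwidth and cost per GB.

AI200_LPDDR_GB = 768   # Qualcomm AI200: LPDDR capacity per accelerator card
GB300_HBM3E_GB = 288   # NVIDIA GB300: HBM3e capacity per GPU

ratio = AI200_LPDDR_GB / GB300_HBM3E_GB
print(f"AI200 offers ~{ratio:.2f}x the per-device memory of a GB300 GPU")
# -> roughly 2.67x, the capacity advantage Qualcomm is emphasizing for inference workloads
```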
However, Qualcomm emphasized that the AI200 chip primarily focuses on AI inference (running AI models) rather than model training.
The AI250 solution, planned for release in 2027, will debut a new memory architecture based on "near-memory computing," which Qualcomm claims delivers more than a 10-fold improvement in effective memory bandwidth while significantly reducing power consumption. Both rack solutions use direct liquid cooling, support scale-up via PCIe and scale-out via Ethernet, and consume 160 kilowatts per rack.
These two AI chips mark a significant step in Qualcomm’s accelerated strategic transformation. Long focused on wireless connectivity and mobile device semiconductors, Qualcomm has had limited presence in the large-scale data center chip market. The introduction of the AI200 and AI250 represents a critical move into the data center AI chip business.
Durga Malladi, Senior Vice President and General Manager of Technology Planning, Edge Solutions, and Data Center at Qualcomm, stated, “With Qualcomm AI200 and AI250, we are redefining what’s possible for rack-scale AI inference.”
Notably, Qualcomm also announced the same day that it would match the annual product release cadence of NVIDIA and AMD, launching a new computing chip each year. A Qualcomm spokesperson, however, declined to disclose pricing for the chips and rack products or to identify the foundry. Earlier reports suggested that both Samsung and TSMC are among Qualcomm's manufacturing partners.
According to Qualcomm's financial reports, its revenue comes primarily from its semiconductor business (QCT) and its technology licensing business (QTL). In the third quarter of fiscal 2025, QCT revenue was $8.993 billion, up 11% year-on-year and accounting for roughly 86.77% of total revenue. Within QCT, handset chip revenue was $6.328 billion, up 7% year-on-year, making it the segment's largest revenue source, while the automotive and IoT chip businesses both posted double-digit growth. With the launch of its AI chips, Qualcomm clearly hopes to add a new engine of revenue growth.
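To put these figures in context, here is a quick back-of-envelope check in Python, using only the numbers quoted above (actual reported totals may differ slightly due to rounding):

```python
# Sanity-check arithmetic on the fiscal Q3 2025 figures cited in the article.
# Inputs are the article's numbers, not values taken directly from Qualcomm's filings.

qct_revenue_bn = 8.993        # QCT (semiconductor) revenue, in USD billions
qct_share_of_total = 0.8677   # QCT as a share of total quarterly revenue
handset_revenue_bn = 6.328    # handset chip revenue within QCT, in USD billions

implied_total_bn = qct_revenue_bn / qct_share_of_total
handset_share_of_qct = handset_revenue_bn / qct_revenue_bn

print(f"Implied total quarterly revenue: ~${implied_total_bn:.2f} billion")   # ~$10.36 billion
print(f"Handset chips as a share of QCT revenue: ~{handset_share_of_qct:.0%}")  # ~70%
```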
NVIDIA still dominates the AI chip market today, but beyond Qualcomm, tech companies such as OpenAI, Google, and Amazon have also been developing their own chips to reduce dependence on NVIDIA, which means Qualcomm will face increasingly fierce competition in this field.