Numerous Chipsets Have Entered the HPC AI Arena, Knocking on NVIDIA's Door |
NEWS |
The majority of the market still sees NVIDIA’s top-shelf A100 and H100 Graphics Processing Units (GPUs) as the options best suited to training the most demanding frontier Large Language Models (LLMs). Despite their price, they offer huge value, as the significant time saved during training accelerates time to market. In the context of mushrooming demand for Artificial Intelligence (AI), with ABI Research’s Artificial Intelligence Software market data (MD-AISOFT-101) forecasting AI software revenue to grow at a Compound Annual Growth Rate (CAGR) of 27% between 2023 and 2030, NVIDIA’s leading chipsets, AI systems, and strong software proposition have helped its share price triple in the last year. However, it is increasingly seeing competitors build strong propositions in the High-Performance Computing (HPC) market.
Beyond benchmarks and announcements, tangible evidence of these chipsets’ ability to provide a workable alternative to NVIDIA GPUs is also mounting.
AI Companies Need Access to Cheaper, Performant Hardware Now |
IMPACT |
Both Stability AI and MosaicML explicitly mention lead times and costs in their Public Relations (PR) materials around their decision to choose alternatives to NVIDIA’s GPUs. These factors are key motivators, especially when addressing less demanding AI workloads (like fine-tuning, inference, or even training of small models). And there is now increasing competition addressing more demanding workloads, evidenced by Lamini and MosaicML’s decisions to deploy AMD’s solutions, and Stability AI’s preference for Intel. While impressive performance benchmarks versus NVIDIA are resonating with customers, software is an equally important consideration. NVIDIA’s CUDA (and NCCL) frameworks have created a “walled garden” around NVIDIA systems, but customers increasingly want to pivot toward more open solutions. In this vein, Stability AI has highlighted the importance of Intel’s software stack for “its seamless model architecture compatibility,” while MosaicML is impressed with AMD’s full-stack approach.
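To make the portability argument concrete, the sketch below is an illustration (not drawn from the vendors’ materials) of how a common framework such as PyTorch abstracts the vendor software stacks discussed above: the same training code can target NVIDIA’s CUDA, AMD’s ROCm (which reuses the torch.cuda interface), or Intel’s Gaudi accelerators (exposed as an “hpu” device once the habana_frameworks plugin is imported), with only device selection changing. The pick_device helper is a hypothetical name used for illustration.

```python
# Minimal, hypothetical sketch (not from the cited vendors' materials):
# it assumes a PyTorch install with one vendor backend available and
# illustrates why framework-level portability softens CUDA lock-in.
import torch


def pick_device() -> torch.device:
    # ROCm builds of PyTorch expose AMD GPUs through the torch.cuda API,
    # so this single branch covers both NVIDIA (CUDA) and AMD (ROCm).
    if torch.cuda.is_available():
        return torch.device("cuda")
    # Intel Gaudi accelerators register an "hpu" device once the
    # habana_frameworks PyTorch plugin has been imported.
    try:
        import habana_frameworks.torch.core  # noqa: F401
        return torch.device("hpu")
    except ImportError:
        return torch.device("cpu")


device = pick_device()
model = torch.nn.Linear(16, 4).to(device)
batch = torch.randn(8, 16, device=device)
print(f"Running on {device}: output shape {model(batch).shape}")
```

The point is not the specific helper, but that workloads written against a common framework Application Programming Interface (API) can be retargeted across vendor stacks, which is the dynamic that makes the open software propositions from Intel and AMD credible to buyers.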
In a growing, highly competitive space, all of these factors matter. But above all, accessibility is key and will help challengers gain commercial success. As enterprise AI deployments continue to scale, enterprises will need reliable access to hardware, which may lead certain players to pivot away from supply-constrained, more expensive NVIDIA systems toward competitors like Intel or AMD.
An Opportunity to Capitalize on Vertically Integrated Solutions and Vendor Lock-in Fears |
RECOMMENDATIONS |
Considering NVIDIA’s entrenchment in the HPC market, Intel and AMD face an uphill battle. However, given the growing demand for hardware capable of training leading-edge models, and NVIDIA’s supply constraints, they certainly have an opportunity to compete. ABI Research recommends that Intel and AMD look to build highly differentiated propositions around openness and accessibility.
Persistent concerns around vendor lock-in will make Intel and AMD’s open-source approach desirable (compared to NVIDIA’s CUDA-based “walled garden”). But NVIDIA should not be too worried: its AI systems still provide market-leading performance and enable simple integration; its walled garden approach retains captive developers, underscoring its value proposition for customers; and it has a very strong verticalized commercial strategy. Nor is it resting on its laurels: the launch and continuous expansion of NeMo (an end-to-end platform for AI developers, including enterprises, to deploy generative AI), the acceleration of its hardware roadmap, including upgrades to its flagship GPUs, and the decision to increase the cadence of hardware releases from every two years to every year all point to a company working to further entrench its position. The availability of alternatives does not immediately spell trouble for NVIDIA, but this is an important space to watch, especially as Intel and AMD’s investments in AI hardware and software ecosystems mature.