NVIDIA’s proposed acquisitions of Run:ai and Deci align closely with ABI Research’s market expectations and are shrewd decisions as NVIDIA looks to invest in horizontal software Intellectual Property (IP) and talent to continue developing its Enterprise AI platform proposition. But this will certainly not be the end. Market participants must be ready for many more acquisitions from AI leaders, both chip and software vendors.
NVIDIA Begins Process of Acquiring Run:ai and Deci | NEWS
News broke recently about NVIDIA’s proposed acquisitions of Israeli Artificial Intelligence (AI) software startups Run:ai and Deci. This is not a surprise, and it aligns closely with ABI Research’s expectations for AI hardware vendors, as explored in the recently published reports Building Chipset Differentiation with AI Optimization across the Distributed Compute Continuum and Commercial and Technical Opportunities for Artificial Intelligence Optimization Platforms. AI hardware companies are increasingly looking to build differentiation through software and accelerate enterprise AI deployment to reap the benefits of increased demand for hardware. Already this year, NVIDIA announced NVIDIA Inference Microservices (NIMs), and these two acquisitions will sit nicely alongside that announcement.
Run:ai is a mature startup, founded in 2018, with a strong partner ecosystem and customer base. It has built a virtualization platform that allows users to orchestrate and optimize AI resource utilization, using techniques such as dynamic scheduling and fractional Graphics Processing Unit (GPU) usage to maximize the efficiency of AI workloads running across the distributed compute continuum. Deci enables the development of compressed AI models for computer vision and generative AI workloads. Its core IP is AutoNAC, a commercially viable Neural Architecture Search (NAS) tool that automates the development of optimized neural networks with hardware and data awareness.
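To make the fractional GPU concept concrete, the short Python sketch below bin-packs jobs that each request a fraction of a GPU onto a small cluster using a first-fit heuristic. It is a minimal illustration of the general idea, assuming a toy in-memory model of GPUs and jobs; the class, function, and job names are hypothetical and do not represent Run:ai’s actual scheduler or Application Programming Interface (API).

```python
# Toy illustration of fractional GPU scheduling via first-fit bin packing.
# Conceptual sketch only -- not Run:ai's actual scheduler or API.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class GPU:
    gpu_id: int
    capacity: float = 1.0                        # 1.0 == one whole physical GPU
    jobs: Dict[str, float] = field(default_factory=dict)

    @property
    def free(self) -> float:
        return self.capacity - sum(self.jobs.values())


def schedule(requests: Dict[str, float], gpus: List[GPU]) -> Dict[str, int]:
    """Place each job's fractional GPU request on the first GPU with room.

    Packing several sub-1.0 jobs onto one device is what lifts utilization
    compared with dedicating a whole GPU to every job.
    """
    placement: Dict[str, int] = {}
    for job, fraction in sorted(requests.items(), key=lambda kv: -kv[1]):
        for gpu in gpus:
            if gpu.free >= fraction:
                gpu.jobs[job] = fraction
                placement[job] = gpu.gpu_id
                break
        else:
            raise RuntimeError(f"No capacity left for {job} ({fraction} GPU)")
    return placement


if __name__ == "__main__":
    cluster = [GPU(0), GPU(1)]
    jobs = {"notebook-a": 0.25, "inference-b": 0.5, "training-c": 1.0, "notebook-d": 0.25}
    print(schedule(jobs, cluster))
    # e.g. {'training-c': 0, 'inference-b': 1, 'notebook-a': 1, 'notebook-d': 1}
```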
How Can NVIDIA Best Leverage Its New Acquisitions across Its Portfolio? | IMPACT
NVIDIA’s recent acquisitions make a great deal of sense to ABI Research and align with its core strategy built around accelerating enterprise AI deployment to increase demand for AI hardware. First, both companies have built solutions targeting one of the biggest problems for enterprise AI deployment: AI efficiency. Scaling AI will bring cost, compute, talent, and energy pressures, which will increase enterprise demand for highly abstracted optimization tools and techniques. Second, these are not “immature” startups, but commercially viable companies that have built well-defined monetization models (which the majority of AI startups still lack) and established customer bases alongside their core, highly differentiated IP. Third, NVIDIA has already partnered with both Independent Software Vendors (ISVs) and has been able to assess commercial customer appetite and technical compatibility. Fourth, Deci has already partnered with NVIDIA’s competitors (Intel, Qualcomm), and by acquiring the startup, NVIDIA will gain proprietary control of one of the very few commercially viable NAS solutions available globally. Fifth, these acquisitions bring technological IP and talent that can be applied horizontally across various business groups within NVIDIA. Below, ABI Research explores the NVIDIA business groups that may be impacted by these acquisitions:
- Cloud & Data Center (NeMo): Run:ai can help enterprise customers optimize GPU resource usage within the NVIDIA platform. By integrating this directly within the NeMo framework, NVIDIA can further bolster its Enterprise AI offering and extend the platform to enable resource efficiency. NIMs provide inference optimization techniques, while Run:ai could complement them by optimizing infrastructure utilization. Deci could fit nicely within the enterprise AI platform and empower enterprises to start building their own optimized neural networks without relying on third-party models.
- Edge & Vision (Metropolis): Deci is already part of the Metropolis partner program, providing compressed AI vision models optimized for the NVIDIA Jetson Edge AI Platform. By bringing Deci in-house, NVIDIA can start building more application-specific, optimized models to further accelerate developer Time to Value (TTV). Deci’s technology IP (AutoNAC) has been commercially grounded in computer vision, but has shown market-leading support for generative AI models (a conceptual sketch of the hardware-aware NAS idea follows this list).
- Device (GeForce RTX): Although on-device AI is not part of NVIDIA’s core business, market hype has pushed the company into this space, leveraging its technological heritage in gaming GPUs. Deci’s tooling and model-building capabilities could augment NVIDIA’s model repository, accelerating the development of applications for NVIDIA-powered Personal Computers (PCs).
- NGC Model Store: NVIDIA has already developed hundreds of models, and Deci will enhance this process by bringing market-leading model developers and NAS IP to NVIDIA. This will help expand pre-trained model zoos for different use cases and enable enterprises to start building their own optimized, proprietary AI models. Expect commercial opportunities to emerge from the combination of Deci’s NAS IP with the scale of NVIDIA’s customer relationships.
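As a rough, conceptual illustration of hardware-aware NAS (referenced in the Edge & Vision bullet above), the Python sketch below scores a handful of candidate architectures with an accuracy proxy and keeps the most accurate one that fits a latency budget for a target device. All functions, heuristics, and numbers are hypothetical stand-ins; this is not Deci’s AutoNAC algorithm or API.

```python
# Toy hardware-aware Neural Architecture Search (NAS) loop.
# Conceptual sketch only -- not Deci's AutoNAC algorithm or API.
import itertools
import random
from dataclasses import dataclass


@dataclass(frozen=True)
class Candidate:
    depth: int          # number of blocks
    width: int          # channels per block
    kernel: int         # convolution kernel size


def estimate_accuracy(c: Candidate) -> float:
    """Stand-in for a trained accuracy predictor (here: a noisy heuristic)."""
    return 0.60 + 0.02 * c.depth + 0.0005 * c.width + random.uniform(-0.01, 0.01)


def estimate_latency_ms(c: Candidate) -> float:
    """Stand-in for a latency lookup table profiled on the target device."""
    return 0.8 * c.depth * (c.width / 64) * (c.kernel / 3)


def search(latency_budget_ms: float) -> Candidate:
    """Exhaustively score a tiny search space, keeping the most accurate
    candidate that fits the device's latency budget (the hardware-aware part)."""
    space = [Candidate(d, w, k)
             for d, w, k in itertools.product((4, 8, 12), (64, 128, 256), (3, 5))]
    feasible = [c for c in space if estimate_latency_ms(c) <= latency_budget_ms]
    return max(feasible, key=estimate_accuracy)


if __name__ == "__main__":
    best = search(latency_budget_ms=10.0)
    print(best, estimate_latency_ms(best))
```

Real NAS tooling replaces the exhaustive loop and toy proxies with learned predictors and device-profiled latency tables, but the core trade-off, namely maximizing model quality subject to a hardware constraint, is the same.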
Further Acquisitions Will Target Other AI Pain Points with MLOps | RECOMMENDATIONS
This is neither the start nor the end of startup acquisitions by NVIDIA and other leading AI companies. Apple is certainly another to watch closely as it continues to build internal AI IP (especially for on-device AI) and talent through startup acquisitions. For AI leaders, the next couple of years present significant opportunities to acquire startups, as investment from Venture Capital (VC) firms will eventually dry up, forcing founders to look for exit strategies.
Although some hardware consolidation will occur, especially incumbent chip vendors acquiring RISC-V IP leaders, the majority of activity will focus on software, as AI leaders look to quickly build differentiation within their enterprise AI proposition. This will be spread across numerous areas covering generative AI, as well as computer vision/predictive AI. ABI Research expects startups operating in the following software domains to be highly sought after, as they are solving some of the key pain points in enterprise AI deployment:
- Optimization: NVIDIA’s acquisitions will not be the end. AI optimization will remain the hottest topic, as cost, energy, and memory will remain massive barriers to deployment. The majority of the enterprise market still relies on complex, open-source tooling that requires AI expertise to utilize effectively, which does not align with market expectations of ubiquitous enterprise AI deployment. On top of this, generative AI brings further complexities, as the intricacies of its neural networks make traditional optimization techniques less effective. AI leaders will need to acquire IP and talent in the optimization space to overcome these issues. Although NAS remains commercially challenging, leading vendors like Eta Compute will certainly be sought after, given the potential of NAS to accelerate enterprise AI deployment.
- Automated Machine Learning (AutoML): Deploying enterprise AI at scale requires greater automation across the entire Machine Learning Operations (MLOps) lifecycle. Most chip vendor platforms, such as Intel’s Tiber and NVIDIA’s Enterprise AI, still rely on manual processes and developer expertise. Bringing in automation will help bridge talent gaps and accelerate scaled enterprise deployments across AI frameworks. Companies like H2O.ai that have built platforms around automation may be of interest to help augment chip vendor enterprise AI platforms. Acquisition or in-house development will be the only options for AI leaders looking to embed automation within their enterprise AI platforms.
- Data Operations (DataOps): Enterprise AI still relies on integration with enterprise data repositories or third-party data platform providers; however, this introduces friction into the MLOps process. Increasingly, companies like WEKA will be acquired by AI leaders looking to smooth the data process and accelerate AI deployment. Such acquisitions will also provide enterprises with access to data services, including pre-curated datasets and synthetic data tools.
- Regulation and Governance: National and vertical regulation is being developed, creating significant risk for enterprise AI deployment. Startups that provide tools to track AI models and adapt them to regional regulation will play a large role in enabling enterprise AI at scale. These tools will be especially important for multinational corporations operating and scaling AI across multiple regions.
- Transparency: Even open-source models lack transparency around how inputs translate into outputs. Software enabling model explainability and interpretability will help build trust with enterprises and subsequently trigger “quicker” adoption. Being able to trace a model’s decision process to understand hallucinations will be necessary, especially for enterprise generative AI adoption.