U.K.-based semiconductor Intellectual Property (IP) house Imagination Technologies recently made the headlines for securing a US$100 million investment to bolster the development of its edge Artificial Intelligence (AI) portfolio. The company has an illustrious legacy, once claiming the top position in Graphics Processing Unit (GPU) innovation, ahead of NVIDIA. In the early days of mobile platforms, Imagination’s footprint was immense: PowerVR IP was in a range of Apple products, including the first iPhone in 2007, MediaTek Systems-on-Chip (SoCs), gaming consoles, automotive systems, and more. By 2008, PowerVR graphics had been shipped in more than 100 million consumer devices, reaching 1 billion by 2013.
This success came to a grinding halt in 2017 when Apple—responsible for around half of its revenue—terminated its licensing arrangement, initiating a period of ferment. Imagination was snapped up by Chinese private equity shortly after, giving the story a geopolitical angle (the relationship with Apple was restored in early 2020, although likely at a smaller commercial value). The final piece is last November’s company-wide 30% staff cull, which was blamed on export restrictions to China and probably factored into the need for the recent cash injection.
Before the financing deal, rumors circulated about a possible return to public markets, a move that would echo fellow British IP house Arm’s partial return to public markets 1 year ago. But where Imagination differs from Arm—and NVIDIA—is the missed AI opportunity. When NVIDIA began to address AI with its GPU portfolio, Imagination remained focused on graphics—two markets with now clearly divergent paths. Nonetheless, Imagination will now refocus its AI strategy around its GPU portfolio, a path NVIDIA has already trodden to great success.
Software First and a Refreshed AI Strategy: GPUs over Fragmented ASICs
The recent financial injection comes with a renewed focus targeting AI workloads via its established GPU IP, which marks a pivot away from a dedicated Application-Specific Integrated Circuit (ASIC) for AI. The strategic decision here is to utilize mature GPU software stacks, which are broader in remit, rather than the fragmented, more narrowly optimized ASIC ecosystem. However, Imagination is not new to using its GPU portfolio to target AI workloads—the company talked up the applicability of its PowerVR IP for Convolutional Neural Networks (CNNs) as early as 2017—and investment in new compute libraries to achieve higher GPU utilization for AI workloads is part of today’s roadmap.
This strategy is complemented by Imagination’s leading role in the Unified Acceleration Foundation (UXL) consortium, which seeks to dislodge NVIDIA’s CUDA from its monopolistic grip on the AI software developer community. UXL aims to create an open-standard, cross-platform, multi-architecture accelerator programming model for application development, based on an evolution of Intel’s oneAPI initiative—a stalwart of open ecosystems. Buy-in has been positive, with members including Arm, Fujitsu, Google, and Qualcomm. By easing software portability from CUDA, UXL hopes to build developers a bridge across NVIDIA’s moat and allow AI applications to run on heterogeneous platforms, including multiple dedicated AI accelerators.
Apropos Open Ecosystems: RISC-V CPUs to Boot
Alongside the focus on AI GPUs, Imagination has invested heavily in RISC-V IP, including the recent release of a RISC-V application processor with AI capabilities targeting consumer and industrial devices. For example, Imagination’s RISC-V IP can be found in Alibaba’s AIoT SoCs, and the company announced an edge AI course in partnership with Spanish and Chinese universities to promote the engineering talent needed to build RISC-V-based SoCs. This partnership is congruent with the long-established objective in China to promote semiconductor independence from the West, and Imagination’s longstanding commercial relationship with China. RISC-V CPUs will form an integral part of Imagination’s edge AI roadmap going forward, addressing open standard, general-purpose compute, alongside the acceleration capabilities of its GPU portfolio, in SoCs and other packages.
Applying General Computing Principles to Edge AI
Imagination’s two-pronged approach to edge AI combines open-standards software development (e.g., through UXL) with accelerated computing systems (i.e., GPUs) to address diverse AI workloads into the future, positioning the company as a legitimate competitor to NVIDIA in this space. The company is promoting the notion that edge AI will benefit from the same computing principles that have been applied to large-scale cloud deployments, namely the utilization of scalable, accelerated computing methods across AI frameworks. This will allow open-standard software performance to scale as the computation performance density of its GPUs increases.
Zooming into the hardware side, the core tenet of Imagination’s strategy is the focus on programmability, acceleration, and flexibility. This fundamentally sets it apart from other edge AI players ploughing resources into domain-specific, narrowly optimized ASICs, like Neural Processing Units (NPUs), for edge AI. This contrast is exemplified by:
- CEVA, with an extensive NPU and Digital Signal Processing (DSP) portfolio
- Edge AI-focused ASIC vendors DEEPX, Hailo, SiMa, and Ambarella
- The NPU arms race between AI Personal Computer (PC) chipset vendors AMD, Qualcomm, and Intel, and captive vendor Apple
- Smartphone Generative Artificial Intelligence (Gen AI) enabled by NPUs in the SoCs of MediaTek, Qualcomm, and captive vendors Google and Apple
Imagination is thus promoting GPU architectures in edge AI use cases and form factors where NPUs have become increasingly popular for addressing AI workloads. This includes mobile, client, automotive, and consumer electronics devices like wearables, where the small size and energy footprint of NPUs have long been touted as essential for deploying on-device AI. On the other hand, by going down the route of more flexible GPUs, Imagination is also less exposed to the very real possibility that a new AI model will emerge that is unsuited to today’s NPUs, which have already had to adjust to serve the emerging transformers of the Gen AI era.
Openness & Flexibility—Will the Bet Pay Off?
The strategy to apply the more open and flexible compute successes of the cloud to edge AI, and the focus on RISC-V, may come up against several countervailing forces and issues. But this is not Imagination’s first challenge, and the company is one of a handful of surviving GPU players from the 1990s, which is no mean feat.
- Incumbent AI chipset vendors are investing heavily in heterogeneous AI platforms with comprehensive Software Development Kits (SDKs); examples include Qualcomm’s AI Hub and, most importantly, NVIDIA’s CUDA. This investment extends to partnerships with Independent Software Vendors (ISVs) developing the applications to run on these platforms, such as Intel’s AI PC Acceleration Program. If Imagination is to serve these addressable markets, its alternative approach, and the omission of the NPU, will require new efforts by developers to optimize for a different, GPU- and RISC-V-based platform.
- A market education initiative will be needed to promote Imagination’s new approach, create buy-in, and prove its viability and longevity over the narrower applicability of NPU optimization.
- Progress in key open initiatives such as UXL is essential if Imagination is to tempt developers away from, e.g., NVIDIA’s DRIVE platform for automotive applications. This also applies to any server-based compute, should Imagination’s updated AI strategy expand to data center form factors.
- Performance progress, and a strong roadmap, will need to be seen in Imagination’s GPU portfolio to address the energy efficiency pitfalls of deploying diverse edge AI applications on GPUs. This particularly applies to always-on use cases, such as wake word detection, where NPUs have excelled to date.
- The use of RISC-V CPUs over Arm and x86 architectures, which may be favored by Chinese customers, has yet to gain the same level of traction in Western markets. Moreover, regulatory restrictions may prohibit the sale of RISC-V IP to China for use in AI, which could deal a substantial blow to revenue. The development of the (AI) RISC-V developer community in Western markets is, therefore, key to securing future revenue from the architecture, alongside its GPU IP, in edge AI SoCs.
- Catching up with NVIDIA, which has invested years and billions of dollars in adapting and optimizing its GPUs to accommodate various AI models, domains, and use cases, will be a challenge. NVIDIA has hundreds (possibly thousands) of Domain-Specific Libraries (DSLs) that enable its GPUs to accelerate various AI and high-performance workloads. This effort requires years-long partnerships with key stakeholders; for example, it took NVIDIA about 7 years of collaboration with OpenAI to optimize its GPUs to train ChatGPT.
About the Author
Paul Schell, Industry Analyst
Paul Schell, Industry Analyst at ABI Research, is responsible for research focusing on Artificial Intelligence (AI) hardware and chipsets with the AI & Machine Learning Research Service, which sits within the Strategic Technologies team. The burgeoning activity around AI means his research covers both established players and startups developing products optimized for AI workloads.