Original Link: https://www.anandtech.com/show/11243/apple-developing-custom-gpu-dropping-imagination
Apple To Develop Own GPU, Drop Imagination's GPUs From SoCs
by Ryan Smith on April 3, 2017 6:30 AM EST

We typically don’t write about what hardware vendors aren’t going to do, as most of those decisions are internal and never make it to the public eye. When such a decision does become public, however, it is often a big deal, and today’s press release from Imagination is especially so.
In a bombshell of a press release issued this morning, Imagination has announced that Apple has informed their long-time GPU partner that they will be winding down their use of Imagination’s IP. Specifically, Apple expects that they will no longer be using Imagination’s IP for new products in 15 to 24 months. Furthermore, the GPU design that replaces Imagination’s will be, according to Imagination, “a separate, independent graphics design.” In other words, Apple is developing their own GPU, and when it is ready, they will drop Imagination’s GPU designs entirely.
This alone would be big news, but the story doesn’t stop there. As Apple’s long-time GPU partner and the provider of the GPU designs underpinning all of Apple’s SoCs going back to the very first iPhone, Imagination is also making a case to investors (and the public) that while Apple may be dropping Imagination’s GPU designs in favor of a custom design, Apple can’t develop a new GPU in isolation – that any GPU developed by the company would still infringe on some of Imagination’s IP. As a result, the company is continuing to sit down with Apple to discuss alternative licensing arrangements, with the intent of defending their IP rights. Put another way, while any Apple-developed GPU will contain far less of Imagination’s IP than the current designs, Imagination believes it will still contain elements based on their IP, and as a result Apple would still owe Imagination (smaller) royalty payments for devices using the new GPU.
An Apple-Developed GPU?
From a consumer/enthusiast perspective, the big change here is of course that Apple is going their own way in developing GPUs. It’s no secret that the company has been stocking up on GPU engineers, and from a cost perspective, money may as well be no object for the most valuable company in the world. However, this is the first confirmation that Apple has been putting those significant resources towards the development of a new GPU. Prior to this, what little we knew of Apple’s development process was that they were taking a sort of hybrid approach to GPU development: designing GPUs based on Imagination’s core architecture, but increasingly divergent and customized from Imagination’s own designs. The resulting GPUs weren’t just stock Imagination designs – which is why we’ve stopped naming them as such – but to the best of our knowledge, they also weren’t new designs built from the ground up.
What’s interesting about this, besides confirming something I’ve long suspected (what else are you going to do with that many GPU engineers?), is that Apple’s trajectory on the GPU side very closely follows their trajectory on the CPU side. In the case of Apple’s CPUs, they first used more-or-less stock ARM CPU cores, then started tweaking the layout with the A-series SoCs, began developing their own CPU core with Swift (A6), and finally dropped the hammer with Cyclone (A7). On the GPU side the path is much the same; after tweaking Imagination’s designs, Apple is now at the Swift stage of the program, developing their own GPU.
What this amounts to for Apple and their products could be immense, or it could be little more than a footnote in the history of Apple’s SoC designs. Will Apple develop a conventional GPU design? Will they try for something more radical? Will they build bigger discrete GPUs for their Mac products? On all of this, only time will tell.
Apple A10 SoC Die Shot (Courtesy TechInsights)
However, and these are words I may end up eating in 2018/2019, I would be very surprised if an Apple-developed GPU has the same market-shattering impact that their Cyclone CPU did. In the GPU space some designs are stronger than others, but A) there is no “common” GPU design like there was with ARM’s Cortex CPUs, and B) there isn’t an immediate and obvious problem with current GPUs that needs to be solved. What spurred the development of Cyclone and Apple’s other high-performance CPUs was that no one was making what Apple really wanted: an Intel Core-like CPU design for SoCs. Apple needed something bigger and more powerful than anyone else could offer, and they wanted to go in a direction that ARM was not, pursuing deep out-of-order execution and a wide issue width.
On the GPU side, however, GPUs are far more scalable. If Apple needs a more powerful GPU, Imagination’s IP can scale from a single cluster up to 16, and the forthcoming Furian can go even higher. And to be clear, unlike CPUs, adding more cores/clusters does help across the board, which is why NVIDIA is able to put the Pascal architecture in everything from a 250-watt card to an SoC. So whatever is driving Apple’s decision, it’s not just about raw performance.
What is still left on the table is efficiency – both area and power – and cost. Apple may be going this route because they believe they can develop a more efficient GPU internally than they can following Imagination’s GPU architectures, which would be interesting to see as, to date, Imagination’s Rogue designs have done very well inside of Apple’s SoCs. Alternatively, Apple may just be tired of paying Imagination $75M+ a year in royalties, and wants to bring that spending in-house. But no matter what, all eyes will be on how Apple promotes their GPUs and their performance later this year.
Speaking of which, the timetable Imagination offers is quite interesting. According to Imagination’s press release, Apple has told the company that they will no longer be using Imagination’s IP for new products in 15 to 24 months. As Imagination is an IP company, this is a critical distinction: it doesn’t mean that Apple is going to launch their new GPU in 15 to 24 months, but rather that they expect to be done rolling out new products using Imagination’s IP altogether within the next two years.
Apple SoC History
| SoC | First Product    | Discontinued                          |
| A7  | iPhone 5s (2013) | iPad Mini 2 (2017)                    |
| A8  | iPhone 6 (2014)  | Still In Use: iPad Mini 4, iPod Touch |
| A9  | iPhone 6s (2015) | Still In Use: iPad, iPhone SE         |
| A10 | iPhone 7 (2016)  | Still In Use                          |
And that, in turn, means that Apple’s new GPU could be launching sooner rather than later. I hesitate to read too much into this because there are so many other variables at play, but the obvious question is what this means for the (presumed) A11 SoC in this fall’s iPhone. Apple has tended to sell most of their SoCs for a few years – trickling down from the iPhone and high-end iPad to their entry-level equivalents – so it could be that Apple needs to launch their new GPU in the A11 in order to have it trickle down to lower-end products inside that 15 to 24 month window. On the other hand, Apple could go with Imagination for the A11, and then simply skip the trickle-down, using new SoC designs for entry-level devices instead. The only thing that’s safe to say right now is that with this revelation, an Imagination GPU design is no longer a lock for the A11 – anything is possible.
But no matter what, this does make it very clear that Apple has passed on Imagination’s next-generation Furian GPU architecture. Furian won’t be ready in time for A11, and anything after that is guaranteed to be part of Apple’s GPU transition. So Rogue will be the final Imagination GPU architecture that Apple uses.
Imagination: Patents & Losing an Essential Contract
As for Imagination, the news is undoubtedly grim, but not necessarily fatal. Imagination has never hidden the fact that Apple is their most important customer – even labeling them as an “Essential Contract” in their annual report – so it’s no secret that if Apple were to leave Imagination, it would be painful.
By the numbers, Apple’s GPU licensing and royalties accounted for £60.7M in revenue in Imagination’s most recent reporting year, which ran from May 1st, 2015 to April 30th, 2016. The problem for Imagination is that this was fully half of their revenue for that year; the company only booked £120M to begin with. Dive further into the numbers and Apple accounts for 69% of Imagination’s GPU revenue. Consequently, in being dropped by Apple, Imagination stands to lose the bulk of their GPU revenue starting two years down the line.
Imagination Financials: May 1st, 2015 to April 30th, 2016
|                      | Company Total | GPUs Total | Apple  |
| Revenue (Continuing) | £120M         | £87.9M     | £60.7M |
| Operating Income     | -£61.5M       | £54.7M     |        |
The double whammy for Imagination is that, as an IP licensor, the cost to the company of serving any single customer is virtually nil. Imagination has to engage in R&D and develop their GPU architecture and designs regardless, so any additional customer is pure profit. But by the same token, losing a customer means the lost revenue comes straight out of those profits. For the 2015/2016 reporting year, Apple’s royalty and licensing payments to Imagination were greater than the profits their PowerVR GPU division generated for the year. Apple is just that large of a customer.
As a result, losing such a large source of revenue puts Imagination in a perilous position. The good news for the company is that their fortunes appear to be improving – if slowly – and they have been picking up more business from other SoC vendors. The problem is that they’ll need a drastic uptick in customers by the time Apple’s payments end in order to pay the bills, never mind turn a profit. Growing their existing business alone may not be enough.
Which is why Imagination’s press release, and the strategy it outlines, is so important. The purpose of Imagination’s release isn’t just to tell the world that Apple is developing a new GPU, but to outline to investors and others how the company intends to proceed. And that path rests on continued negotiations with Apple to secure a lesser, but ongoing, revenue stream.
The crux of Imagination’s argument is that it’s impractical for Apple to develop a completely clean GPU devoid of any of Imagination’s IP, and this is for a few reasons. The most obvious is that Apple already knows how Imagination’s GPUs work; even though Apple wouldn’t be developing a bit-for-bit compatible GPU – thankfully for Apple, the code app developers write for GPUs operates at a higher level and generally isn’t tied to Imagination’s architecture – Apple’s engineers carry confidential knowledge of those GPUs that they may bring to a new design. Meanwhile, on the more practical side of matters, Imagination holds a significant number of GPU patents (they’ve been at this for over 20 years), so developing a GPU that doesn’t infringe on those patents would be difficult, especially in the mobile space. Apple couldn’t implement Imagination’s Tile Based Deferred Rendering technique, for example, which has been the heart and soul of Imagination’s GPU designs.
However, regardless of the architecture used and how it’s designed, the more immediate problem for Apple – and the reason that Imagination is likely right, to an extent – is replicating all of the features available in Imagination’s GPUs. Because Apple’s SoCs have always used GPUs from the same vendor, certain vendor-specific features such as PowerVR Texture Compression (PVRTC) are widely used in iOS app development, and Apple has long recommended that developers use that format. For their part, Apple is already in the process of digging themselves out of that hole by adding support for the open ASTC format to their texture compression tools, but that still leaves the question of what to do with existing apps and games. If Apple wants to ensure backwards compatibility, then they need to support PVRTC in some fashion (even if it’s just converting the textures ahead of time). And this still doesn’t account for any other Imagination-patented features that have become canonized into iOS over time.
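For developers, the practical upshot is to stop hard-coding PVRTC and pick a format based on what the GPU actually supports. The snippet below is a minimal, illustrative Swift/Metal sketch of that idea – it is not from Imagination’s release or Apple’s documentation, and it assumes an app that ships its textures in both ASTC and PVRTC variants:

```swift
import Metal

// Illustrative sketch: prefer the vendor-neutral ASTC format where the GPU
// supports it, and fall back to Imagination's PVRTC on older hardware.
// Assumes the app bundles both variants of its compressed textures.
func preferredCompressedFormat(for device: MTLDevice) -> MTLPixelFormat {
    // ASTC support arrived with the A8-class GPUs (iOS GPU family 2).
    if device.supportsFeatureSet(.iOS_GPUFamily2_v1) {
        return .astc_4x4_ldr       // open, vendor-neutral format
    }
    return .pvrtc_rgba_4bpp        // Imagination-specific legacy fallback
}

// Usage: decide once at startup, then load the matching asset set.
if let device = MTLCreateSystemDefaultDevice() {
    let format = preferredCompressedFormat(for: device)
    print("Loading compressed textures as \(format)")
}
```

The same selection could just as easily happen offline when building asset bundles; either way, it illustrates why Apple’s move to ASTC only helps apps that get updated, leaving older PVRTC-only titles to be handled some other way.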
Consequently, for Imagination the best move is to get Apple to agree to patent indemnification or some other form of licensing for their new GPU. For Apple it would ensure that nothing they do violates an Imagination patent, and for Imagination it would secure at least a limited revenue stream from Apple. Otherwise Imagination would be in a very tight spot, and Apple would face the risk of patent lawsuits (though Imagination isn’t making any overt threats, at least not yet).
The Future: Competition, Secrecy, & the Unexpected
Finally, while Apple developing their own GPU is not unexpected given their interests and resources, the ramifications of it may very well be. There hasn’t been a new, major GPU vendor in almost a decade – technically Qualcomm’s team would count as the youngest, though it’s a spin-off of what’s now AMD’s Radeon Technologies Group – and in fact like the overall SoC market itself, the market for GPU vendors has been contracting as costs go up and SoC designers settle around fewer, more powerful GPU vendors. So for someone as flush with cash as Apple to join the GPU race is a very big deal; just by virtue of starting development of their own GPU, they are now the richest GPU designer.
Of course, once they start shipping their custom GPU, this will also open them up to patent challenges from those other players. While it has largely been on the backburner of public attention, this decade has seen a few GPU vendors take SoC vendors to court. This includes NVIDIA with Samsung and Qualcomm (a case that they lost), and still ongoing is AMD’s case against LG/MediaTek/Sigma/Vizio.
GPU development is a lot more competitive because developers and compiled programs aren’t tied to a specific architecture – the abstraction of the APIs insulates against individual architectures. However, it also means that there are a lot of companies developing novel technologies, and all of those companies are moving in the same general direction with their designs. This potentially makes it very difficult to develop an efficient GPU, as the best means of achieving that efficiency have often already been patented.
What exists then is an uneasy balance between GPU vendors, and a whole lot of secrets. AMD and NVIDIA keep each other in check with their significant patent holdings, Intel licenses NVIDIA patents, etc. And on the flip side of the coin, some vendors like Qualcomm simply don’t talk about their GPUs, and while this has never been stated by the company, the running assumption has long been that they don’t want to expose themselves to patent suits. So as the new kid on the block, Apple is walking straight into a potential legal quagmire.
Unfortunately, I suspect this means that we’ll be lucky to get any kind of technical details out of Apple on how their GPUs work. They can’t fully hide how their CPUs work due to how program compilation works (which is why we know as much as we do), but the abstraction provided by graphics APIs makes it very easy to hide the inner workings of a GPU and make it a black box. Even when we know how something works, features and implementation details can be hidden right under our noses.
Ultimately, today’s press release is a bit bittersweet for everyone involved in the industry. On the one hand, it absolutely puts Imagination, a long-time GPU developer, on the back foot. That’s not to spell doom and gloom, but the company will have to work very hard to make up for losing Apple. On the other hand, with a new competitor in the GPU space – albeit one we’ve been expecting – things are about to get very interesting. If nothing else, Apple enjoys throwing curveballs, so expect the unexpected.