Looks like the future of Apple is ARM, even for their laptops and PCs. With computing relying less and less on CPUs and more on dedicated processing units, the in-house GPU thing makes sense.
This is more about reducing cost and turning the personal computing industry, as we know it, into an appliance-like industry. Instead of making a do-it-all device, it's more profitable designing and selling dedicated hardware for dedicated/specific tasks. Pretty darn obvious if you ask me.
Surely this is entirely different from Apple's current CPU situation. They are still paying ARM for an architecture license, are they not? Whereas IMG is saying Apple's view is that they won't be paying for anything from IMG.
Also the statement says Apple won't be using them in NEW products in 15-24 months. That must surely mean new designs. By inference that means IMG should still be in any new products launched in the next 15-ish months. Reading between the lines, it looks like Sept 2018 is being targeted.
Thing is, ARM is already partly Apple-originated, with Apple a founding investor that funded it for the Newton.
But, given the rumors of Apple buying Toshiba's NAND flash fabs, it seems more likely that Apple is going all in on in-house manufacturing and development of everything, including ISA and fabs.
-- That's a moderately large undertaking.
that's kind of an understatement. the logic of the ALU, for instance, has been known for decades. ain't no one suggested an alternative. back in the good old days of IBM and the Seven Dwarves, there were different architectures (if one counts the RCA unlicensed 360 clone as "different") which amounted to stack vs. register vs. direct memory. not to mention all of the various mini designs from the Departed. logic is a universal thing, like maths: eventually, there's only one best way to do X. thus, the evil of patents on ideas.
The underlying design and the ISA don't have to be tightly coupled. Look at modern x86, they don't look much like oldschool CISC designs. If they're using a completely in-house design, there's no reason they couldn't start transitioning to MIPS64 or whatever at some point.
Anyway I'm sad to see Apple transitioning away from PowerVR designs. That was the main reason their GPUs were always good. Now there might not be a high-volume product with a Furian GPU. :(
-- Look at modern x86, they don't look much like oldschool CISC designs.
don't conflate the RISC-on-the-hardware implementation with the ISA. except for 64 bit and some very CISC extended instructions, current Intel cpu isn't RISC or anything else but CISC to the C-level coder.
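To make that concrete, here is a small, hypothetical illustration (actual compiler output depends on flags and vectorization): the ISA a C compiler targets on current Intel parts is still classic CISC, with read-modify-write memory operands that a load/store RISC ISA has to split into separate instructions.

```c
#include <stddef.h>
#include <stdint.h>

/* For the loop body below, an x86-64 compiler may emit a single
 * read-modify-write instruction of the form
 *     add DWORD PTR [base + index*4], reg
 * whereas a load/store RISC ISA (ARM, MIPS) needs a load, an add, and
 * a store. Whatever RISC-like micro-ops the Intel core breaks this
 * into are invisible at this level, which is the point made above. */
void add_to_all(int32_t *v, size_t n, int32_t x) {
    for (size_t i = 0; i < n; i++) {
        v[i] += x;
    }
}
```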
I think it's perhaps too soon to analyze THAT possibility (apple-specific ISA). Before that, we need to see how the GPU plays out. Specifically:
The various childish arguments being put forth about this are obviously a waste of time. This is not about Apple saving 30c per chip, and it's not about some ridiculous Apple plot to do something nefarious. What this IS about, is the same thing as the A4 and A5, then the custom cores --- not exactly *control* so much as Apple having a certain vision and desire for where they want to go, and a willingness to pay for that, whereas their partners are unwilling to be that ambitious.
So what would ambition in the space of GPUs look like? A number of (not necessarily incompatible) possibilities spring to mind. One possibility is much tighter integration between the CPU and the GPU. Obviously computation can be shipped from the CPU to the GPU today, but it's slower than it should be because of getting the OS involved, having to copy data a long distance (even if HSA provides a common memory map and coherency). A model of the GPU as something like a sea of small, latency tolerant, AArch64 cores (ie the Larrabee model) is an interesting option. Obviously Intel could not make that work, but does that mean that the model is bad, that Intel is incompetent, that OpenGL (but not Metal) was a horrible target, that back then transistors weren't yet small enough?
With such a model Apple starts to play in a very different space, essentially offering not a CPU and a GPU but latency cores (the "CPU" cores, high power and low power) and throughput cores (the sea of small cores). This sort of model allows for moving code from one type of core to another as rapidly as code moves from one CPU to another on a multi-core SoC. It also allows for the latency tolerant core to perhaps be more general purpose than current GPUs, and so able to act as more generic "accelerators" (neuro, crypto, compression --- though perhaps dedicated HW remains a better choice for those?)
Point is, by seeing how Apple structure their GPU, we get a feeling for how large scale their ambitions are. Because if their goal is just to create a really good "standard" OoO CPU, plus standard GPU, then AArch64 is really about as good as it gets. I see absolutely nothing in RISC-V (or any other competitor) that justifies a switch.
But suppose they are willing to go way beyond a "standard" ISA? Possibilities could be VLIW done right (different model for the split between compiler and HW as to who tracks which dependencies) or use of relative rather than absolute register IDs (ie something like the Mill's "belt" concept). In THAT case a new instruction set would obviously be necessary.
I guess we don't need to start thinking about this until Apple makes bitcode submission mandatory for all App store submissions --- and we're not even yet at banning 32-bit code completely, so that'll be a few years. Before then, just how radical Apple are in their GPU design (ie apparently standard GPU vs sea of latency tolerant AArch-64-lite cores) will tell us something about how radical their longterm plans are.
And remember always, of course, this is NOT just about phones. Don't you think Apple desktop is as pissed off with the slow pace and lack of innovation of Intel? Don't you think their data-center guys are well aware of all that experimentation inside Google and MS with FPGAs and alternative cores and are designing their own optimized SoCs? At least one reason to bypass IMG is if IMG's architecture maxes out at a kick-ass iPad, whereas Apple wants an on-SoC GPU that, for some of their chips at least, is appropriate to a new ARM-based iMac 5K and Mac Pro.
What name99 said. Which is awfully like what Qualcomm is doing, isn't it? A bunch of conceptually-different processor designs in one 'platform'. Software uses whichever is most appropriate.
Very unlikely. They gave up that chance a couple years ago (ironically, to Imagination).
Consider, it takes 4-5 years from initial architecture design to final shipment. No company is immune to this timeframe no matter how large. Even more time is required for new ISAs because there are new, unexpected edge cases that occur.
Consider, ARM took about 4 years to ship from the time the ISA was announced. Most of their licensees took closer to 5 years. Apple took a bit less than 2 years.
Consider, Apple was a front-runner to buy MIPS so they could have their own ISA, but they backed out instead. The new ARM ISA is quite similar to MIPS64.
Thought: Apple started designing a uarch that could work well with either MIPS or their new ARMv8. A couple years in (about the time nailing down the architecture would start to become unavoidable), they show ARM a proposal for a new ISA and recommend ARM adopt it, otherwise they buy MIPS and switch. ARM announces a new ISA and immediately has teams start working on it, but Apple has a couple-year head start. Apple won big because they shipped an extremely fast CPU a full two years before their competitors, and it took their competitors even more years to catch up.
Maybe imperfect, but it's the best explanation I can come up with for how events occurred.
Computing still heavily relies on the CPU for all the things that matter to power users. ARM is a long way away from being powerful enough to actually be useful for power users and creators. It is good enough for consumption, and getting better.
But then again, Apple hasn't been doing well catering to creators anyway. Still no refresh for the Mac Pro. So you might be right. But that means Apple is OK with ignoring that segment, which they probably are.
Single-purpose equipment isn't mainly CPU dependent. This is my point. Relying on the CPU for general-purpose functionality is inherently the least efficient approach, especially for consumer workloads.
Outside the consumer market, things like engineering and video production software are still very CPU dependent because the software isn't written efficiently. It's written that way for the sole purpose of supporting the widest range of currently available hardware. I'd argue that if a CAD program were re-written from the ground up to be hardware dependent and GPU accelerated ONLY, then it would run faster and more fluidly on an iPad than on a Core i7 with integrated graphics, if the storage speed were the same.
This leaves only niche applications that are inherently dependent on a CPU and can't be offloaded to hardware accelerators. With more work on efficient multi-threaded coding, Apple's own CPU cores, in a quad/octa configuration, can arguably suffice. Single-threaded applications are also arguably good enough, even on A72/A73 cores.
Again, this conversation is about consumer/prosumer workloads. It's evident that Apple isn't interested in Server/corporate workloads.
This has been Apple's vision since inception. They want to do everything in-house as a single package for a single purpose. They couldn't in the past, and almost went bankrupt, because they weren't _big enough_. This isn't the case now.
The future doesn't look very bright for the personal computing industry as we know it. There has been talk and rumors that Samsung is taking a VERY similar approach. Rumors started hitting 2 years ago that they were also building their own in-house GPU, and are clashing with Nvidia and AMD graphics IP in the process. It also led Nvidia to sue Samsung for reasons only known behind the scenes.
Yeah, let's make chip X that can only do task N. And while we are at it, why not scrap all those cars you can drive anywhere there is an open road, and make cars which are best suited for one purpose. You need a special car to do groceries, a special car to go to work, a bunch of different special cars when you go on vacation, depending on what kind of vacation it is.
Implying prosumer software isn't properly written is laughable, and a clear indication you don't have a clue. That's the only kind of software that scales well with cores, and can scale linearly to as many threads as you have available.
But I'd have to agree that crapple doesn't really care about making usable hardware, their thing is next-to-useless toys, because it doesn't matter how capable your hardware is, what matters is how much of that lacking and desperately needed self esteem you get from buying their branded, overpriced toy.
Back in the day Apple did good products and struggled to make a profit, until genius Jobs realized how profitable it can be to exploit dummies and make them even dumber, giving rise to crapple, and the shining beacon of an example that the rest of the industry is taking, effectively ruining technology and reducing it to a fraction of its potential.
Chill bro. I said current software is written to support the most amount of hardware combinations possible. And yes, that's NOT the most efficient way to write software, but it _is_ the most accessible for consumers.
I wasn't implying that one way is better than the other. But it's also true that a single $200 GPU runs circles around a $1,500 10-core Intel CPU in rendering.
How amazing that an "overpriced toy" still shames all Android manufacturers in single thread performance. The brand new S8 (with a price increase, no-less) can't even beat a nearly 2 year old iPhone 6S.
You forgot the second part, where Android SoCs shame Apple's SoCs in multi-core performance. If you're going to selectively bring up single-core then you should have also mentioned multi-core.
Yeah, and if only phones were all about single threaded performance, or even performance in general. I still run an ancient note 3 and despite being much slower than current flagship devices, it is still perfectly usable for me, and I do with it more things than you do on a desktop PC.
You're looking at it from the wrong angle. The numbers speak for themselves and it all comes down to how much a company can spend on R&D. Plus how important components like GPUs have become to computing in general.
With such small annual revenue, how much can IMG spend on R&D? 10 million? 20?
Apple can easily spend 10-20 times that amount and not even feel a scratch. Everything being equal, how much you put into something is how much you're getting out. You want a top-of-the-line product? Well, it's going to cost you. If Apple is to stay at the top, their GPUs need to be on the same level as their CPUs.
Plus, GPUs these days are used for all kinds of things other than graphics. Look how lucrative the automotive business is for Nvidia, not to mention GPU-based servers.
As for litigation and patents, gimme a break. What, just because a company bought a GPU from another for a long time, now they are supposed to do it forever? When Apple started to buy them from IMG 10-15 years ago, it made sense at the time. Now, the market is different and so are the needs. Time to move on.
When a company doesn't spend much on R&D, they get accused of complacency. When they want to spend a lot, now they get threatened with lawsuits. How is that for hypocrisy?
Possibly. But there's only so many ways to do something, especially if you want to do it well. Imagination have lots of patents, as the article explains. I expect to see a lot of lower-level IP being licensed.
I suspect CAD and video production programmers would beg to differ. My experience in image processing certainly contradicts your assertion. Apple has also apparently abandoned the prosumer market so as long as it can run the Starbucks app and PowerPoint most of their customers will be happy.
You don't have a clue what you're talking about. The CPU does most of the work because it's the best solution. Specialized hardware requires very static requirements and very high performance requirements. GPUs exist only because graphics primitives don't change much and they are very compute intensive. Also throw in that they are easily parallelized.
I have no idea of the point you're trying to make, and neither do you.
There hasn't been a single ARM chip suitable for HPC. The A10 has decent scalar performance, making it a good enough solution for casual computing. But for rendering, processing, encoding and pretty much all intensive number crunching, its performance is abysmal.
That being said, there is nothing preventing ARM from extending the throughput of the SIMD units. NEON is still stuck at 128 bits but could be expanded to match what we have in x86: 256 and 512 bits. But then again, transistor count and power usage will rise proportionally, so it won't really have an advantage over x86 efficiency-wise.
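To put a rough sketch on that 128-bit point (illustrative only, not tied to any particular core): a NEON vector op touches four 32-bit floats at a time, where AVX and AVX-512 touch eight or sixteen per instruction.

```c
#include <arm_neon.h>

/* Multiply-accumulate over float32 arrays using 128-bit NEON: four
 * lanes per instruction. AVX (256-bit) or AVX-512 on x86 would retire
 * 8 or 16 lanes per instruction for the same loop, which is the
 * throughput gap discussed above. n is assumed to be a multiple of 4
 * to keep the sketch short. */
void mla_f32(float *acc, const float *a, const float *b, int n) {
    for (int i = 0; i < n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        float32x4_t vacc = vld1q_f32(acc + i);
        vst1q_f32(acc + i, vmlaq_f32(vacc, va, vb));   /* acc += a * b */
    }
}
```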
Apple doesn't seem to be interested in anything outside of the scope of consumer/prosumer computing. Latest Macbooks anyone?
"But for rendering, processing, encoding and pretty much all intensive number crunching its performance is abysmal."
Rendering, encoding, etc. can be offloaded to better/faster dedicated co-processors that run circles around the best core designs from Intel or AMD.
The very fact that Apple is designing its own GPUs now supports my argument that they want to build more functionality into those GPUs beyond the current GP-GPU paradigm.
Yep. And ARM ISA chips suitable for HPC are coming this year, notably in the form of the Cavium ThunderX2. I'm expecting big things for that chip. I hope it contains SVE.
I think you're right. Might be a good time to start stockpiling components?
Already started a couple years ago and my wife thinks I'm a hoarder because I have multiple unopened boxes of CPU/GPU's, Motherboards, SBC's, etc... tucked away in a closet for safe keeping. However, I'm really more concerned about devices turning into the Wireless & WIFI only crap. Perhaps I'm just crazy, but one day tin foil hats will be fashionable!! :)
I don't think this is about saving money. With about 300 million iPhones and iPads being sold a year, they are paying 25 cents per device, which is nothing for Apple.
Exactly, $65M+ a year is next to nothing for Apple. And when Intel can't get a GPU out without signing an agreement with either AMD or Nvidia, I highly doubt Apple could do any different.
But since Apple pays only ~$10M for the ARM uArch license plus a tiny amount per chip sold, my guess is that Apple is trying to lower the IMG price tag to something around $20M. Not really sure all this is worth the hassle for a $40M saving.
Yeah, it would probably be cheaper to stay the way things are, but that's not Apple. It's the same reason they stopped licensing ready-made ARM cores for chump change and made their own.
> basic ImgTec patents are starting to expire
That's only for old patents, right? If Apple needs to make a competitive modern GPU supporting all the latest features, that's still going to touch much more recent Imagination IP.
> Alternatively, Apple may just be tired of paying Imagination $75M+ a year
Yeah, that's all there is to it.
Even with CPUs, they could easily have paid companies like Qualcomm or Nvidia to develop a custom wide CPU for them. Heck, isn't that what Denver is anyway? The first Denver was comfortably beating Apple A8 at the time. Too bad there's no demand for Tegras anymore, Denver v2 might have been good competition for A10. Maybe someone could benchmark a car using it...
Denver could only beat the A8 in software coded for that kind of CPU (VLIW). Any kind of spaghetti code left Denver choking on its own spit, and it was more power hungry to boot.
A good first attempt, but Nvidia seems to have abandoned it. The fact that Nvidia didn't use Denver in their own tablet communicated that it was a failure in Nvidia's eyes.
That's what I read as well: they will go 2 + 4, with 2 Denver cores and 4 ARM cores (probably A73), letting the ARM cores handle the spaghetti code and the Denver cores handle the VLIW-friendly code.
>The first Denver was comfortably beating Apple A8 at the time
Eh, partial truth at best there. Denver's binary translation architecture worked well for straight, predictable code, but as soon as you started getting unpredictable it would choke up. So it suffered a fair bit on simple user-facing multitasking, for example, or any benchmark with an element of randomness.
Denver 2 with a doubled far cache could have been interesting, I guess we'll see, but Denver didn't exactly light the world on fire.
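A toy sketch of the distinction being made above (generic C, nothing Denver-specific): dynamic binary translation to a wide VLIW pays off on straight-line, predictable loops, and struggles when the hot path depends on the data.

```c
#include <stddef.h>

/* Predictable: every iteration does the same work, so a translating
 * VLIW core can build one wide, software-pipelined trace and replay it. */
float dot(const float *a, const float *b, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

/* Unpredictable: the branch depends on the data, so the hot trace keeps
 * changing and the translated code keeps getting redone, which is
 * roughly the spaghetti-code case described in the comments above. */
float sum_positive(const float *a, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++) {
        if (a[i] > 0.0f)
            s += a[i];
    }
    return s;
}
```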
I'd be curious if this is a sort of power play. This announcement has tanked the Imagination stock (down like 60% this morning). Acquire IMG cheap. Get all that IP and block other corps from access at the same time?
Except Apple didn't manipulate the market, did they? (honest question here). If Apple told Imagination during a phone call they were looking at dropping them in 18-24 months, and Imagination went and made that information public, would that really be Apple manipulating the market? It would seem it would have required Apple to make the public statement. Or is who said it not at all relevant in these kind of cases?
If that's illegal market manipulation I would LOVE to see how the Nokia/Microsoft deal went down. If THAT transaction was legal how it went down, I don't know what could possibly be viewed as a market manipulation tactic. To eliminate your product line, driving down the stock to previously unimaginable levels, then selling the company to a group that is hiring you and you stand to make millions off the sale, and THAT was legal? This would be nothing compared to that.
Public corporations are required to release information that materially changes their circumstances as soon as they receive it. It's irrelevant who creates that information.
One year ago Apple talked to Imagination. Imagination probably thought they were indispensable and could charge whatever they wanted ("OK, our market cap is 500 million pounds, so how about we sell for 650 million"), to which Apple basically said "fsck you. We've given you our price, take it or leave it."
These company sale negotiations are not especially rational. Company CEOs who got there through the science/engineering route tend to be too TIMID, too scared that what they've done can easily be copied, and so they sell too cheap. On the other hand, company CEOs who got there through sales or finance tend to have a wildly over-inflated view of how unique and special their technology is, and so insist on unrealistically high sale prices. The second looks to me like what happened here --- too many IMG execs drank their own koolaid, asking things like "what's a reasonable P/E multiplier? Or should we price based on annual revenue?" NOT "how hard would it be to duplicate what we offer?". Especially when you factor in that all the most ambitious engineers at IMG would likely be happy to leave for Apple if an offer were made, regardless of who owns IMG, just because Apple will give them more scope for grand projects.
Well, I don't know how many Imagination engineers would want to up sticks from Cambridge, England and move to California. Some, of course, but probably not many. I don't think Apple has an engineering presence in the UK.
But generally yes, there must be quite a few people feeling sick and rather stupid in Cambridge this week. CSR, who are next door, sold themselves to Qualcomm last year and it's done them no harm.
Why would you imagine that the largest company in the world only has engineering employees in California? Apple has a UK headquarters today, which it is in the process of moving to a substantial new building, basically the UK equivalent of the new Cupertino spaceship campus. (Not identical because this is space Apple is renting, but presumably it will be "Apple-ized"...)
I wonder if they are still going to support Imagination's PVR format (that's a proprietary texture format). Practically every OpenGL game in the App Store is using the PVR format, so if they pulled support for it, that would cause mayhem.
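For context on how baked-in that dependency is, here is a rough, hypothetical sketch of how an iOS OpenGL ES game feeds pre-compressed PVRTC assets straight to the GPU via the GL_IMG_texture_compression_pvrtc extension; the helper name and size math are illustrative, not from any particular engine.

```c
#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

/* Upload a PVRTC 4bpp texture exactly as it was baked into the app
 * bundle. PVRTC v1 textures are square powers of two; at 4 bits per
 * pixel the data size is width*height/2 bytes with a 32-byte minimum.
 * If the GPU stopped exposing the IMG extension, assets like this
 * would have to be re-encoded or transcoded at load time. */
void upload_pvrtc_4bpp(GLuint tex, GLsizei dim, const void *data) {
    GLsizei size = (dim * dim) / 2;
    if (size < 32) size = 32;

    glBindTexture(GL_TEXTURE_2D, tex);
    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                           dim, dim, 0, size, data);
}
```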
Excellent article, particularly considering how quickly it was posted following the press release.
One thing, though: it doesn't specifically mention VR, a field of which Tim Cook has said (among a few other pronouncements) “I don’t think it’s a niche, […] It’s really cool and has some interesting applications.” Could it be that Apple has decided that it needs its own architecture to do VR better than the competition?
I think you're right...I think they are eyeing VR/augmented reality and hedging on needing more GPU power. Hell, Intel's current (maybe past now) push with Iris was spurred by Apple forcing them to get stronger IGP.
They wanted better CPU performance and said hell we can do this better ourselves and they did with cyclone and blew everyone away. They'll do the same with the GPU...
This was my first thought too. This is the only field where a drastic departure from the status quo can have a big impact because it is still in its infancy. More power efficiency is really needed to make that push to good enough visual quality in a portable mobile design. They are currently falling behind in Virtual Reality and Augmented Reality to HTC, Samsung, Sony, Facebook, Microsoft, and Google.
That would be cool, the VR focus. Hopefully increase screen resolutions at the same time, VR is atrocious on my 6S (half of 750p per eye up close is not pretty)
There is some evidence that Apple has already switched over the compute core of their current GPU to their own design, as previously theorized by the same guy who figured out that NVIDIA had switched over to tile based rendering.
He goes into the difference between Imagination's compute engine, which can only do 16-bit math by running it at 32 bits and ignoring the other 16 bits you don't care about, and Apple's compute engine, which handles 16-bit math natively.
Now what other heavy workload for mobile devices already uses the hell out of 16 bit math?
The sort of deep learning AI algorithms that Apple prefers to run locally on your mobile devices for privacy reasons.
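A rough sketch of why native 16-bit floats matter for that workload (illustrative NEON code, not Apple's actual kernels): half-precision weights halve the memory footprint and bandwidth, and a compute engine with real FP16 ALUs also doubles its math rate per register width instead of widening everything to 32 bits as done below.

```c
#include <arm_neon.h>

/* Dot product with activations in float32 and weights stored as 16-bit
 * halves. The halves are widened to float32 here; a GPU compute engine
 * with native fp16 math skips the widening and packs twice as many
 * values per register. n is assumed to be a multiple of 4 for brevity. */
float dot_f16_weights(const float *act, const __fp16 *w, int n) {
    float32x4_t acc = vdupq_n_f32(0.0f);
    for (int i = 0; i < n; i += 4) {
        float32x4_t a  = vld1q_f32(act + i);
        float32x4_t wf = vcvt_f32_f16(vld1_f16(w + i));  /* widen 4 halves */
        acc = vmlaq_f32(acc, a, wf);
    }
    return vaddvq_f32(acc);   /* horizontal sum (AArch64) */
}
```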
So basically Apple has decided they can do phone/tablet GPUs better if they do them in house. And I wonder what this means for their continued reliance on Intel/Nvidia/AMD for mac GPUs?
Are they going to make something that they can use in future Mac designs as well as in phones? Switching Mac CPUs to an Apple made design has a big problem because of all the Intel-only software out there, and all the Mac users depending on x86 compatibility for VMs or Bootcamp.
But there's not nearly the same degree of lockin for GPU architecture. If they come up with something better (ie, more power efficient and fast enough) than Intel's GPUs, or better (for pro apps, they don't give a damn about games) than AMD/Nvidia, then that would be an even more interesting shakeup.
Apple's new GPU would have to be top notch. AMD has Vega and Nvidia has Volta coming soon, both of which will be crazy powerful and efficient compared to what we have today. It would be a tough road to get to the point where they could compete. I'd be more than happy for the extra competition in the GPU market, though I'm guessing it would be locked down to a Mac.
As for the x86 stuff, it's here to stay I think. I wonder how much needing to have an x86 CPU holds us back.
Hopefully Imagination's lawyers know their business. Apple engineers stare at IP that a company owns and sells, then say "hey, we can do that without paying for it". Apple will need to be extremely careful with their design if they don't want to get sued to the moon.
It would be a lot more interesting to me if they scaled this up to the Mac. Then there would be some new blood in the graphics arena, even if the third party was Apple and locked down to Apples hardware. But I wonder how much better they could do than Nvidia on efficiency, if any.
It's tough to say. These GPU manufacturers have people designing them that are committed to a certain way of looking at it. Even moving generation to generation, there's a similarity.
It's possible that Apple's people have different ideas. Something that people speculating on this need to understand is that over the years, Apple bought several small GPU design houses. It's very likely that those people, and their IP, haven't been deposited in a vault somewhere. I expect that they've been working on GPU architecture for quite some time. And they've been hiring GPU engineers for several years, more recently, including a bunch from Imagination a year, or so, ago.
Things could get interesting once Apple starts using HBM in their designs. Right now AMD, and as of just today Intel, are HBM2-capable, which will start the drama soon.
Since when does IMG have to bow down to Apple and accept any offer thrown their way? IMG's IP is strong, and no doubt Apple is gonna lose badly in the courts.
This is awesome news; now Apple can do for GPUs what they did for CPUs in the mobile market. Look at ARM CPUs' IPC before Apple came along, it was crap compared to Intel.
Look at Imagination GPUs compared to ATI/Nvidia, it's complete junk. Now we'll finally get developers creating serious games for the mobile market instead of being second fiddle to the console/desktop segments. Ultimately this will force the Android HW makers to get serious about SoC GPUs too (and catch up to the Nvidia Shield, obviously).
Imagination GPUs are not junk compared to the big discrete GPUs. They use way less power, so of course they aren't going to be as fast, but in performance per watt they are doing just fine.
In fact, Imagination GPUs are the number 1 best-performing low-power GPUs, designed from the ground up to be extremely fast at low power. They aren't meant to scale up like the big boys and they don't have to. Specialized hardware is always going to do a better job than hardware that has to scale across all power levels.
But they're obviously not perf/watt leaders across the entire V/f curve right?
If I had a GPU that was the perf/watt leader at 10 mW, you wouldn't care for VR, because operating at 10 mW isn't very interesting. In the same way, IMG's design might be fine at 1W, but if you're operating at 10W for a VR device that Apple is potentially targeting, or 50W for a console or home appliance, IMG just won't cut it. People who want quality VR aren't expecting to run it off just a 1W GPU.
> serious game devs on mobile
This is a serious case of "if we start spouting bullshit early enough we can claim we were right because people were saying it was true for years!" The mobile games industry is the same as it ever was: a wart on the ass of TV ads.
Couldn't Apple just buy Nvidia, or for that matter Imagination? They have $215 billion in cash reserves; Imagination's market cap is less than a billion.
Some time ago (about a year?) Nvidia announced they were willing to not only sell GPUs but also license their IP to customers. Would a switch to Nvidia IP fit the timetable?
Given IMG's designs, patent portfolio and today's stock price, wouldn't they be quite an attractive purchase to vendors like Samsung and Intel that need a boost in their own GPU development?
Intel? No, they already licensed AMD's tech and Samsung is satisfied with the graphics they get from Qualcomm and ARM. Until high quality VR becomes a real thing on phones there's simply no need for more GPU power than an Adreno 530.
iDevices' year-plus performance gap over Android devices will only continue to widen. Performance is going to matter more and more as things like on-device AR become important.
I don't get why people think Apple is so beholden to IMG's patents, etc. As of today, IMG's stock is down 61.6% already. Their market cap is now just $290 million. Apple could easily buy them at a premium without a hiccup. The point being, if Apple wanted to buy them, they'd already own them. Maybe they forced this option as a negotiation tactic. Either way, Apple doesn't make strategic moves like this to save money. They do it to provide their products with a competitive advantage. Whatever Apple develops is not burdened with the need to appeal to a wide range of customers. Apple can develop exactly what they want for their chips. This should indeed be interesting.
Apple should just buy them. It's a rounding error cost to them and they can then build on the platform just how they like. Seems like a no brainer, especially as the stock has crashed!
I think this might be on the nose... It isn't an accident that Apple also invited journalists to announce their commitment to the pro market...
For me the first step will be the iMac (an iMac Pro was even mentioned). I think Apple wanted control because they felt that by the time they adapted third parties' GPUs to their designs, they were outdated and underpowered.
We'll probably see refreshed iMacs this year with a GPU made by Apple, scaling across models and price points. OpenGL, OpenCL, but mainly Metal (DSP for certain tasks, management of screen refresh rates, etc.), maybe HSA (that is wishful thinking though).
Still they will hopefully allow better handling of external GPUs though (for CUDA).
Then when the data is in, they'll be able to finalise the design of the GPUs for the 2018 Mac Pros, in case anything was wrong or unexpected. Apple in the "i" era has been very iterative with great success.
"We’re making a documentary about how wearable technology can help improve fitness, overall health and overcome obstacles. We’re looking for stories that are heartfelt, make you smile, that are surprising and unique. Simply losing weight isn’t enough for the stories we’re looking for. There needs to be a personal, mesmerizing, and unique aspect to each story that draws you in and makes you understand how technology can enhance your life." Tell your story.
Last year I read reports that AMD and Apple are "cooperating on GPUs", and I assumed the thinner MBP GPUs were the result of that. But maybe those were just a first step?
Given the logic expressed in this article as to why Apple pursued autonomy in the CPU space, and how that forms a lens for us to look at their GPU ambitions, surely it makes more sense for them to tank ImgTec's share price... and then buy them.
It gives them a world-class GPU tech and patent portfolio... and their very own CPU architecture (MIPS).
Apple made two large R&D investments in China, and last week agreed to two more R&D facilities there. All these engineers and billion-dollar investments should be kept busy, and Apple will need to cut somewhere in the US. If Apple doesn't invest in China, and eventually doesn't transfer know-how and technology there, the Chinese will shut the market for them - like they did several times already, with "iStore" outages :) :)
I'd like to believe this is a smart move considering Imagination Technologies just brokered a deal w/ Qualcomm and because Apple's HW secrets seem to be compromised by agencies affiliated w/ the CIA.
Ever since 2011, the year Microsoft/Nvidia turned its graphics APIs (i.e. DirectX 11+) into undetectable, ominous spyware, I've been increasingly paranoid that these outsourced GPU/SoC architectures have opened up arbitrary but functional methods for injecting unmanaged code into their host operating systems. I've read several stories about Qualcomm having an extensive dirty little past of surveillance secrets embedded in their devices.
So my major point of contention here is: If IMG Tech uses the same PowerVR architecture for both Apple and Qualcomm/Android, would this enable hacked Qualcomm devices to become a test bench for designing software & firmware to infect Apple products?
You really don't need a hardware back door on Apple's devices now; the best attacks are performed on the automatic backups made by iTunes when a device is connected to the computer, or when a device uploads data into the cloud. Apple gets to say its device security is unaffected by the FBI/CIA while they get basically any information they want with some brute force and court orders.
Apple intentionally made the encryption applied to those backups weaker. Apple got its way.
"Imagination Technologies just brokered a deal w/ Qualcomm" What is the source of this info? Where Qualcomm will use IMG GPU? What about their own Adreno?
I have to wonder if this isn't about being competitive with CUDA/OpenCL engines that run on GPU. Not only can this dramatically speed up some compression and encoder algorithms running on a GPU, but right now on a Mac, there is no support for things like Tensorflow running on an Imagination GPU. Apple hardware can't play along.
Actually, OpenCL on the Mac Pro was one thing that was working and using both GPUs. The problem then was that the tech wasn't properly used by software developers, and then the GPUs didn't evolve. Check the Luxmark 3 benchmarks: http://barefeats.com/hilapmore.html
Apple's Final Cut Pro is optimised for it too.
Apple's true failure was believing that the entire industry was going to switch towards more abstraction of the hardware, so that 2 GPUs could compete with single GPUs. HSA is still in the future... CrossFire and SLI look to be dying...
Apple now feels it's time to custom design it and ensure they aren't dependent on others' choices.
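To make the "using both GPUs" point above concrete, here is a minimal host-side OpenCL sketch (generic boilerplate, not taken from any shipping app) that enumerates every GPU the platform exposes and shares one context across them, which is roughly how Luxmark-style workloads kept both Mac Pro FirePros busy:

```c
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void) {
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint num_devices = 0;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &num_devices);

    /* One context spanning every GPU; work is then queued on a
     * per-device command queue, so a renderer or encoder can keep
     * both GPUs of a dual-GPU Mac Pro busy at once. */
    cl_context ctx = clCreateContext(NULL, num_devices, devices, NULL, NULL, &err);

    for (cl_uint i = 0; i < num_devices; i++) {
        char name[128];
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("GPU %u: %s\n", i, name);
    }

    clReleaseContext(ctx);
    return 0;
}
```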
You can do neural network training in Tensorflow and then export that neural net to run under Apple's Basic Neural Network Subroutines API which was added in the last version of MacOS and iOS.
Apple may well have some of their own graphics IP to bargain with. Technically, this isn't their first GPU. They shipped the short-lived QuickDraw 3D accelerator in 1995.
In no way will this save Apple money when they are currently paying Imagination around 30 cents per device. For Apple, that is pocket change. It is probably worth $60 million per year for Apple to pay Imagination to avoid lawsuits over the IP, because they will need to defend against AMD, Nvidia, Intel and Qualcomm as well, which means they need to be able to countersue using Imagination's patents. Otherwise, Apple is a prime target for the picking.
Whatever. What I see is the prescience of AMD in correctly seeing (IMO), ~15 years ago, that the future belonged to having a presence in both processors vital to IT solutions: CPU & GPU. (They formally bought ATI ~2005.)
Now we have Apple and Intel (that recent Israeli company buy) in a mad scramble to catch up.
A beancounter footnote is that the automotive IT market is forecast at $120B by 2020, dwarfing all current IT markets.
This is largely about converting the vast output of camera sensors into usable AI for better, safer, and even autonomous driving. The CPU plays a subsidiary role in this.
Time to market is a critical factor. AMD is by far best placed to offer automotive engineers a unified architecture and set of developer tools.
Dealing with one set of dilberts is always better than two.
There is a big hole between PowerVR GPUs and the current market players (Nvidia). Apple needs a more scalable, flexible and more efficient GPU for future products. Look at Apple's CPUs: in single-core efficiency they're close to Intel, and we don't know what happens when Apple scales up its multi-core CPUs or implements multithreading.
lilmoe - Monday, April 3, 2017 - link
Shots fired! Yikes.lilmoe - Monday, April 3, 2017 - link
Looks like the future of Apple is ARM, even for their laptops and PCs. With computing relying less and less on CPUs and more on dedicated processing units, the in-house GPU thing makes sense.This is more about reducing cost and turning the personal computing industry, as we know it, into an appliance-like industry. Instead of making a do-it-all device, it's more profitable designing and selling dedicated hardware for dedicated/specific tasks. Pretty darn obvious if you ask me.
Tangey - Monday, April 3, 2017 - link
Surely this is entirely different from Apple's current CPU situation. They are still paying ARM an Arch license are they not ? Whereas IMG is saying Apple's view is that they won't be paying for anything from IMG.Tangey - Monday, April 3, 2017 - link
Also the statement says Apple won't be using them in NEW products in 15-24 months. That must surely mean new designs. By inference that means IMG should still be in any new products launched in the next 15 ish months. Reading between the lines it looks like Sept 2018 is being targetted.nathanddrews - Monday, April 3, 2017 - link
New Apple GPU codename: Shimpi.tipoo - Monday, April 3, 2017 - link
AMD GPUs have the Wasson unit, now Apples GPUs will have the Shimpi Unit.Dennis Travis - Tuesday, April 4, 2017 - link
Shimpi sure sounds familar! :D Grinlilmoe - Monday, April 3, 2017 - link
I wouldn't be too surprised to see them move away from ARM in the longer term as well. Maybe a new uArc announcement is on the horizon.lilmoe - Monday, April 3, 2017 - link
I didn't say this right at first, the future isn't "ARM" per se, but more like in-house ARM _like_.ImSpartacus - Monday, April 3, 2017 - link
Are you suggesting that Apple will build its own isa?!That's a moderately large undertaking.
It's onething to design your own cores and uncore. That's attainable. But ditch the entire isa?
lilmoe - Monday, April 3, 2017 - link
In the short term? No. Evidently? Highly possible. Nothing's stopping them.psychobriggsy - Monday, April 3, 2017 - link
RISC-V would be a potential free-to-license ISA that has had a lot of thought put into it.But maybe for now ARM is worth the license costs for Apple.
vFunct - Monday, April 3, 2017 - link
Thing is, Arm is already Apple originated, being funded by Apple for their Newton.But, given the rumors of Apple buying Toshiba's NAND flash fabs, it seems more likely that Apple is going all in on in-house manufacturing and development of everything, including ISA and fabs.
vladx - Monday, April 3, 2017 - link
Apple owning their own fabs? Seriously doubt it, the investment is not worth it for just in-house manufacturing.Lolimaster - Monday, April 3, 2017 - link
And if your sales kind of plummet, the fab costs will make you sink.FunBunny2 - Monday, April 3, 2017 - link
-- That's a moderately large undertaking.that's kind of an understatement. the logic of the ALU, for instance, has been known for decades. ain't no one suggested an alternative. back in the good old days of IBM and the Seven Dwarves, there were different architectures (if one counts the RCA un-licenced 360 clone as "different") which amounted to stack vs. register vs. direct memory. not to mention all of the various mini designs from the Departed. logic is a universal thing, like maths: eventually, there's only one best way to do X. thus, the evil of patents on ideas.
Alexvrb - Monday, April 3, 2017 - link
The underlying design and the ISA don't have to be tightly coupled. Look at modern x86, they don't look much like oldschool CISC designs. If they're using a completely in-house design, there's no reason they couldn't start transitioning to MIPS64 or whatever at some point.Anyway I'm sad to see Apple transitioning away from PowerVR designs. That was the main reason their GPUs were always good. Now there might not be a high-volume product with a Furian GPU. :(
FunBunny2 - Tuesday, April 4, 2017 - link
-- Look at modern x86, they don't look much like oldschool CISC designs.don't conflate the RISC-on-the-hardware implementation with the ISA. except for 64 bit and some very CISC extended instructions, current Intel cpu isn't RISC or anything else but CISC to the C-level coder.
willis936 - Wednesday, April 5, 2017 - link
"Let's talk about the hardware. Now ignore the hardware."name99 - Monday, April 3, 2017 - link
I think it's perhaps too soon to analyze THAT possibility (apple-specific ISA). Before that, we need to see how the GPU plays out. Specifically:The various childish arguments being put forth about this are obviously a waste of time. This is not about Apple saving 30c per chip, and it's not about some ridiculous Apple plot to do something nefarious. What this IS about, is the same thing as the A4 and A5, then the custom cores --- not exactly *control* so much as Apple having a certain vision and desire for where they want to go, and a willingness to pay for that, whereas their partners are unwilling to be that ambitious.
So what would ambition in the space of GPUs look like? A number of (not necessarily incompatible) possibilities spring to mind. One possibility is much tighter integration between the CPU and the GPU. Obviously computation can be shipped from the CPU to the GPU today, but it's slower than it should be because of getting the OS involved, having to copy data a long distance (even if HSA provides a common memory map and coherency). A model of the GPU as something like a sea of small, latency tolerant, AArch64 cores (ie the Larrabee model) is an interesting option. Obviously Intel could not make that work, but does that mean that the model is bad, that Intel is incompetent, that OpenGL (but not Metal) was a horrible target, that back then transistors weren't yet small enough?
With such a model Apple starts to play in a very different space, essentially offering not a CPU and a GPU but latency cores (the "CPU" cores, high power and low power) and throughput cores (the sea of small cores). This sort of model allows for moving code from one type of core to another as rapidly as code moves from one CPU to another on a multi-core SoC. It also allows for the latency tolerant core to perhaps be more general purpose than current GPUs, and so able to act as more generic "accelerators" (neuro, crypto, compression --- though perhaps dedicated HW remains a better choice for those?)
Point is, by seeing how Apple structure their GPU, we get a feeling for how large scale their ambitions are. Because if their goal is just to create a really good "standard" OoO CPU, plus standard GPU, then AArch64 is really about as good as it gets. I see absolutely nothing in RISC-V (or any other competitor) that justifies a switch.
But suppose they are willing to go way beyond a "standard" ISA? Possibilities could be VLIW done right (different model for the split between compiler and HW as to who tracks which dependencies) or use of relative rather than absolute register IDs (ie something like the Mill's "belt" concept). In THAT case a new instruction set would obviously be necessary.
I guess we don't need to start thinking about this until Apple makes bitcode submission mandatory for all App store submissions --- and we're not even yet at banning 32-bit code completely, so that'll be a few years. Before then, just how radical Apple are in their GPU design (ie apparently standard GPU vs sea of latency tolerant AArch-64-lite cores) will tell us something about how radical their longterm plans are.
And remember always, of course, this is NOT just about phones. Don't you think Apple desktop is as pissed off with the slow pace and lack of innovation of Intel? Don't you think their data-center guys are well aware of all that experimentation inside Google and MS with FPGAs and alternative cores and are designing their own optimized SoCs? At least one reason to bypass IMG is if IMG's architecture maxes out at a kick-ass iPad, whereas Apple wants an on-SoC GPU that, for some of their chips at least, is appropriate to a new ARM-based iMac 5K and Mac Pro.
Meteor2 - Tuesday, April 4, 2017 - link
What name99 said. Which is awfully like what Qualcomm is doing, isn't it? A bunch of conceptually-different processor designs in one 'platform'. Software uses whichever is most appropriate.peevee - Tuesday, April 18, 2017 - link
It is certainly easier to design your own ISA than to build your own core for somebody else's ISA. And ARM64 us FAR from perfect. So 1980s.quadrivial - Monday, April 3, 2017 - link
Very unlikely. They gave up that chance a couple years ago (ironically, to imagination).Consider, it takes 4-5 years from initial architecture design to final shipment. No company is immune to this timeframe no matter how large. Even more time is required for new ISAs because there are new, unexpected edge cases that occur.
Consider, ARM took about 4 years to ship from the time the ISA was announced. Most of their licensees took closer to 5 years. Apple took a bit less than 2 years.
Consider, Apple was a front-runner to buy MIPS so they could have their own ISA, but they backed out instead. The new ARM ISA is quite similar to MIPS64.
Thought, Apple started designing a uarch that could work well with either MIPS or their new ARMv8. A couple years in (about the time nailing down the architecture would start to become unavoidable), they show ARM a proposal for a new ISA and recommend ARM adopt that ISA otherwise they buy MIPS and switch. ARM announces a new ISA and immediately has teams start working on it, but Apple has a couple year head start. Apple won big because they shipped an extremely fast CPU a full two years before their competitors and even more years for their competitors to catch up.
Maybe imperfect, but its the best explanation I can come up with for how events occurred.
TheMysteryMan11 - Monday, April 3, 2017 - link
Computing still heavily relies on CPU for all the things that matter to power users. ARM is long way away from being powerful enough to actually be useful for power users and creators.It is good enough for consumption and getting better.
But then again, Apple hasnt been doing well catering to creators anyway. Still no refresh for Mac Pro. So you might be right. But that means Apple is Ok with ignoring that segment, which they probably are.
lilmoe - Monday, April 3, 2017 - link
Single purpose equipment aren't mainly CPU dependent. This is my point. Relying on the CPU for general purpose functionality is inherently the least efficient, especially for consumer workloads.Outside the consumer market, for example engineering and video production software, are still very CPU dependent because the software isn't written efficiently. It's so for the sole purpose of supporting the most amount of currently available hardware. I'd argue that if a CAD program was re-written from the ground up to be hardware dependent and GPU accelerated ONLY, then it would run faster and more fluidly on an iPad than on a Core i7 with integrated graphics, if the storage speed was the same.
This leaves only niche applications that are inherently dependent on a CPU, and can't be offloaded to hardware accelerates. With more work on efficient multi-threaded coding, Apple's own CPU cores, in a quad/octa configuration, can arguably suffice. Single-threaded applications are also arguably good enough, even on A72/A73 cores.
Again, this conversation is about consumer/prosumer workloads. It's evident that Apple isn't interested in Server/corporate workloads.
This has been Apple's vision since inception. They want to do everything in-house as a single package for a single purpose. They couldn't in the past, and almost went bankrupt, because they weren't _big enough_. This isn't the case now.
The future doesn't look very bright for the personal computing industry as we know it. There has been talk and rumors that Samsung is taking a VERY similar approach. Rumors started hitting 2 years ago that they were also building their own in-house GPU, and are clashing with Nvidia and AMD graphics IP in the process. It also lead Nvidia to sue Samsung for reasons only known behind the scenes.
ddriver - Monday, April 3, 2017 - link
Yeah let's make the x chip that can only do one n task. And while we are at it, why not scrap all those cars you can drive anywhere there is an open road, and make cars which are best suited for one purpose. You need a special car to do groceries, a special car to go to work, a bunch of different special cars when you go on vacation, depending on what kind of vacation it is.Implying prosumer software isn't properly written is laughable, and a clear indication you don't have a clue. That's the only kind of software that scales well with cores, and can scale linearly to as many threads as you have available.
But I'd have to agree that crapple doesn't really care about making usable hardware, their thing is next-to-useless toys, because it doesn't matter how capable your hardware is, what matters is how much of that lacking and desperately needed self esteem you get from buying their branded, overpriced toy.
Back in the days apple did good products and struggled to make profit, until genius Jobs realized how profitable it can be to exploit dummies and make them even dummer, giving rise to crapple, and the shining beacon of an example that the rest of the industry is taking, effectively ruining technology and reducing it to a fraction of its potential.
lilmoe - Monday, April 3, 2017 - link
Chill bro.I said current software is written to support the most amount of hardware combinations possible. And yes, that's NOT the most efficient way to write software, but it _is_ the most accessible for consumers.
I wasn't implying that one way is better than the other. But it's also true that a single $200 GPU runs circles around $1500 10 core Intel CPU in rendering.
steven75 - Monday, April 3, 2017 - link
How amazing that an "overpriced toy" still shames all Android manufacturers in single thread performance. The brand new S8 (with a price increase, no-less) can't even beat a nearly 2 year old iPhone 6S.I wish all "toys" were superior like that!
fanofanand - Monday, April 3, 2017 - link
How many people are working (like actual productivity) on an S8? Cell phones are toys 99% of the time.FunBunny2 - Monday, April 3, 2017 - link
-- How many people are working (like actual productivity) on an S8? Cell phones are toys 99% of the time.I guess Mao was right.
Meteor2 - Wednesday, April 5, 2017 - link
Not many people 'work' on a phone -- now. But with Continuum and Dex, that number is going to rise.raptormissle - Monday, April 3, 2017 - link
You forgot the second part where Android SoC's shame Apple's SoC in multi-core performance. If you're going to selectively bring up single core then you should have also mentioned multi-core.ddriver - Tuesday, April 4, 2017 - link
Yeah, and if only phones were all about single threaded performance, or even performance in general. I still run an ancient note 3 and despite being much slower than current flagship devices, it is still perfectly usable for me, and I do with it more things than you do on a desktop PC.cocochanel - Tuesday, April 4, 2017 - link
You're looking at it from the wrong angle. The numbers speak for themselves and it all comes down to how much a company can spend on R&D. Plus how important components like GPU's have become to computing in general.With such small annual revenue, how much can IMG spent on R&D ? 10 million ? 20 ?
Apple can easily spend 10-20 times that amount and not even feel a scratch. Everything being equal, how much you put into something is how much you're getting out. You want a top of the line product ? Well, it's going to cost you. If Apple is to stay at the top, their GPU's need to be on the same level as their CPU's.
Plus, GPU's these days are used for all kinds of other things other than graphics. Look how lucrative the automotive business is for Nvidia not to mention GPU based servers.
As for litigation and patents, gimme a break. What, just because a company bought a GPU from another for a long time, now they are supposed to do it forever ? When Apple started to buy them from IMG 10-15 years ago, it made sense at the time. Now, the market is different and so are the needs. Time to move on.
When a company doesn't spend much on R&D, they get accused of complacency. When they want to spend a lot, now they get threatened with lawsuits. How is that for hypocrisy ?
Meteor2 - Wednesday, April 5, 2017 - link
Possibly. But there's only so many ways to do something, especially if you want to do it well. Imagination have lots of patents, as the article explains. I expect to see a lot of lower-level IP being licensed.BedfordTim - Monday, April 3, 2017 - link
I suspect CAD and video production programmers would beg to differ. My experience in image processing certainly contradicts your assertion.Apple has also apparently abandoned the prosumer market so as long as it can run the Starbucks app and PowerPoint most of their customers will be happy.
prisonerX - Monday, April 3, 2017 - link
You don't have a clue what you're talking about. The CPU does most of the work becuase it's the best solution. Specialized hardware requires very static requirements and very high performance requirements. GPUs exist only becuase graphics primitives don't change much and they are very compute intensive. Also throw in that they are easily parallelized.I have no idea of the point you're trying to make, and neither do you.
ddriver - Monday, April 3, 2017 - link
There hasn't been a single ARM chip suitable for HPC. A10 has decent scalar performance, making it a good enough solution for casual computing. But for rendering, processing, encoding and pretty much all intensive number crunching its performance is abysmal.That being said, there is nothing preventing from extending the throughput of SIMD units. NEON is still stuck at 128bit but can easily be expanded to match what we have in x86 - 256 and 512 bits. But then again, transistor count and power usage will rise proportionally, so it will not really have an advantage to x86 efficiency wise.
lilmoe - Monday, April 3, 2017 - link
Apple doesn't seem to be interested in anything outside of the scope of consumer/prosumer computing. Latest Macbooks anyone?"But for rendering, processing, encoding and pretty much all intensive number crunching its performance is abysmal."
Rendering, encoding, etc. can be offloaded to better/faster dedicated co-processors that run circles around the best core design from Intel or AMD.
The very fact that Apple are designing their own GPUs now supports my argument that they want to build more functionality to those GPUs aside from the current GP-GPU paradigm.
psychobriggsy - Monday, April 3, 2017 - link
ARM offer SVE (IIRC) that allows 512-2048-bit wide SIMD for HPC ARM designs.It has been suggested that Apple's GPU may in-fact be more Larrabee-like, but using SVE with Apple's small ARM cores.
Meteor2 - Wednesday, April 5, 2017 - link
Yep. And ARM ISA chips suitable for HPC are coming this year, notably in the form of the Cavium ThunderX2. I'm expecting big things for that chip. I hope it contains SVE.
DezertEagle - Tuesday, April 4, 2017 - link
I think you're right. Might be a good time to start stockpiling components? Already started a couple of years ago, and my wife thinks I'm a hoarder because I have multiple unopened boxes of CPUs/GPUs, motherboards, SBCs, etc. tucked away in a closet for safekeeping. However, I'm really more concerned about devices turning into the wireless & Wi-Fi only crap. Perhaps I'm just crazy, but one day tin foil hats will be fashionable!! :)
msroadkill612 - Monday, May 1, 2017 - link
And shots swallowed at Imagination.
Lord-Bryan - Monday, April 3, 2017 - link
Wow, I knew that Apple was stockpiling GPU engineers, but this was kind of "unexpected".
agoyal - Monday, April 3, 2017 - link
I don't think this is about saving money. With about 300 million iPhones and iPads being sold a year, they are paying 25 cents per device, which is nothing for Apple.
iwod - Monday, April 3, 2017 - link
Exactly, $65M+ a year is next to nothing for Apple. And when Intel can't get a GPU out without signing an agreement with either AMD or Nvidia, I highly doubt Apple could do any different. But since Apple pays only ~$10M for the ARM uArch license plus a tiny, tiny amount per chip sold, my guess is that Apple is trying to lower the IMG price tag to something around $20M. Not really sure all this is worth the hassle for a $40M saving.
tipoo - Monday, April 3, 2017 - link
Yeah, it would probably be cheaper to stay the way things are, but that's not Apple. It's the same reason they stopped licensing ready-made ARM cores for chump change and made their own.
ET - Monday, April 3, 2017 - link
> Imagination has a significant number of GPU patents (they’ve been at this for over 20 years)
And given that the patent period is 20 years, that's probably why we're seeing Apple doing it now, as basic ImgTec patents are starting to expire.
renz496 - Monday, April 3, 2017 - link
That's only for old patents, right? But if Apple needs to make a competitive modern GPU supporting all the latest features, that's still going to touch much more recent Imagination IP.
trane - Monday, April 3, 2017 - link
> Alternatively, Apple may just be tired of paying Imagination $75M+ a year
Yeah, that's all there is to it.
Even with CPUs, they could easily have paid companies like Qualcomm or Nvidia to develop a custom wide CPU for them. Heck, isn't that what Denver is anyway? The first Denver was comfortably beating Apple A8 at the time. Too bad there's no demand for Tegras anymore, Denver v2 might have been good competition for A10. Maybe someone could benchmark a car using it...
TheinsanegamerN - Monday, April 3, 2017 - link
Denver could only beat the A8 in software coded for that kind of CPU (VLIW). Any kind of spaghetti code left Denver choking on its own spit, and it was more power hungry to boot. A good first attempt, but Nvidia seems to have abandoned it. The fact that Nvidia didn't use Denver in their own tablet communicated that it was a failure in Nvidia's eyes.
tipoo - Monday, April 3, 2017 - link
Parker will use Denver 2 I believe, but paired with stock big ARM cores as well, probably to cover for its weaknesses.
fanofanand - Monday, April 3, 2017 - link
That's what I read as well; they will go 2 + 4, with 2 Denver cores and 4 ARM cores (probably A73), letting the ARM cores handle the spaghetti code and the Denver cores handle the VLIW-friendly code.
tipoo - Monday, April 3, 2017 - link
>The first Denver was comfortably beating Apple A8 at the time
Eh, partial truth at best there. Denver's binary translation architecture worked well for straight, predictable code, but as soon as things got unpredictable it would choke up. So it suffered a fair bit on simple user-facing multitasking, for example, or any benchmark with an element of randomness.
Denver 2 with a doubled far cache could have been interesting, I guess we'll see, but Denver didn't exactly light the world on fire.
dud3r1no - Monday, April 3, 2017 - link
I'd be curious if this is a sort of power play. This announcement has tanked the Imagination stock (down like 60% this morning). Acquire IMG cheap. Get all that IP and block other corps from access at the same time?
Ultraman1966 - Monday, April 3, 2017 - link
Antitrust laws say they can't do that.
melgross - Monday, April 3, 2017 - link
What antitrust laws? Is Apple the biggest GPU manufacturer around?
Eden-K121D - Monday, April 3, 2017 - link
Market manipulation. I think the SEC and FSA won't be pleased.
DJFriar - Monday, April 3, 2017 - link
Except Apple didn't manipulate the market, did they? (Honest question here.) If Apple told Imagination during a phone call that they were looking at dropping them in 18-24 months, and Imagination went and made that information public, would that really be Apple manipulating the market? It would seem it would have required Apple to make the public statement. Or is who said it not at all relevant in these kinds of cases?
fanofanand - Monday, April 3, 2017 - link
If that's illegal market manipulation, I would LOVE to see how the Nokia/Microsoft deal went down. If the way THAT transaction went down was legal, I don't know what could possibly be viewed as a market manipulation tactic. To eliminate your product line, driving down the stock to previously unimaginable levels, then sell the company to a group that is hiring you, while you stand to make millions off the sale, and THAT was legal? This would be nothing compared to that.
prisonerX - Tuesday, April 4, 2017 - link
Public corporations are required to release information that materially changes their circumstances as soon as they receive it. It's irrelevant who creates that information.
name99 - Monday, April 3, 2017 - link
Oh, take off your tin-foil hat. We know the sequence of events.
https://arstechnica.com/apple/2016/03/apple-acquir...
One year ago Apple talked to Imagination. Imagination probably thought they were indispensable and could charge whatever they wanted ("OK, our market cap is 500 million pounds, so how about we sell for 650 million"), to which Apple said basically "fsck you. We've given you our price, take it or leave it."
These company sales negotiations are not especially rational.
Company CEOs who got there through the science/engineering route tend to be too TIMID, too scared that what they've done can easily be copied, and so they sell too cheap.
On the other hand, company CEOs who got there through sales or finance tend to have a wildly over-inflated view of how unique and special their technology is, and so insist on unrealistically high sales prices. This second case looks to me like what happened here --- too many IMG execs drank their own Kool-Aid, asking things like "what's a reasonable P/E multiplier? Or should we price based on annual revenue?" NOT "how hard would it be to duplicate what we offer?". Especially when you factor in that all the most ambitious engineers at IMG would likely be happy to leave for Apple if an offer were made, regardless of who owns IMG, just because Apple will give them more scope for grand projects.
Meteor2 - Wednesday, April 5, 2017 - link
Well, I don't know how many Imagination engineers would want to up sticks from Cambridge, England and move to California. Some, of course, but probably not many. I don't think Apple has an engineering presence in the UK. But generally yes, there must be quite a few people feeling sick and rather stupid in Cambridge this week. CSR, who are next door, sold themselves to Qualcomm last year and it's done them no harm.
name99 - Wednesday, April 12, 2017 - link
Why would you imagine that the largest company in the world only has engineering employees in California? Apple has a UK headquarters today, which it is in the process of moving to a substantial new building, basically the UK equivalent of the new Cupertino spaceship campus. (Not identical because this is space Apple is renting, but presumably it will be "Apple-ized"...)
http://www.theverge.com/2016/9/29/13103702/apple-u...
lefty2 - Monday, April 3, 2017 - link
I wonder if they are still going to support Imagination's PVR format (that's a proprietary texture format). Practically every OpenGL game in the App Store is using the PVR format, so if they pulled support for it, that would cause mayhem.
Hamm Burger - Monday, April 3, 2017 - link
Excellent article, particularly considering how quickly it was posted following the press release. One thing, though: it doesn't specifically mention VR, a field of which Tim Cook has said (among a few other pronouncements) “I don’t think it’s a niche, […] It’s really cool and has some interesting applications.” Could it be that Apple has decided that it needs its own architecture to do VR better than the competition?
ATC9001 - Monday, April 3, 2017 - link
I think you're right... I think they are eyeing VR/augmented reality and hedging on needing more GPU power. Hell, Intel's current (maybe past now) push with Iris was spurred by Apple forcing them to get a stronger IGP. They wanted better CPU performance and said hell, we can do this better ourselves, and they did with Cyclone and blew everyone away. They'll do the same with the GPU...
DroidTomTom - Monday, April 3, 2017 - link
This was my first thought too. This is the only field where a drastic departure from the status quo can have a big impact because it is still in its infancy. More power efficiency is really needed to make that push to good enough visual quality in a portable mobile design. They are currently falling behind in Virtual Reality and Augmented Reality to HTC, Samsung, Sony, Facebook, Microsoft, and Google.
tipoo - Monday, April 3, 2017 - link
That would be cool, the VR focus. Hopefully they increase screen resolutions at the same time; VR is atrocious on my 6S (half of 750p per eye up close is not pretty).
BillBear - Tuesday, April 4, 2017 - link
There is some evidence that Apple has already switched over the compute core of their current GPU to their own design, as previously theorized by the same guy who figured out that NVIDIA had switched over to tile-based rendering.
http://www.realworldtech.com/apple-custom-gpu/
He goes into the difference between Imagination's compute engine, which can do 16-bit math by running it in 32 bits and then ignoring the other 16 bits you don't care about, and Apple's compute engine, which handles 16-bit math natively.
Now what other heavy workload for mobile devices already uses the hell out of 16-bit math?
The sort of deep learning AI algorithms that Apple prefers to run locally on your mobile devices for privacy reasons.
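The bandwidth angle is easy to see in plain C. A hypothetical sketch (not Apple's code; __fp16 is an ARM compiler extension) of an inference-style dot product with half-precision storage and a 32-bit accumulator:

```c
#include <stddef.h>

/* Weights and activations stored as 16-bit floats: half the memory and
   half the bandwidth of fp32, which is what matters for mobile inference.
   Accumulation is done in 32-bit to limit rounding error. */
float dot_fp16(const __fp16 *w, const __fp16 *x, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++)
        acc += (float)w[i] * (float)x[i];   /* each fp16 operand widened */
    return acc;
}
```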
Meteor2 - Wednesday, April 5, 2017 - link
Ooo, good spot!
Glaurung - Monday, April 3, 2017 - link
So basically Apple has decided they can do phone/tablet GPUs better if they do them in-house. And I wonder what this means for their continued reliance on Intel/Nvidia/AMD for Mac GPUs? Are they going to make something that they can use in future Mac designs as well as in phones? Switching Mac CPUs to an Apple-made design has a big problem because of all the Intel-only software out there, and all the Mac users depending on x86 compatibility for VMs or Boot Camp.
But there's not nearly the same degree of lock-in for GPU architecture. If they come up with something better (i.e., more power efficient and fast enough) than Intel's GPUs, or better (for pro apps, they don't give a damn about games) than AMD/Nvidia, then that would be an even more interesting shakeup.
Eyered - Monday, April 3, 2017 - link
Apple's new GPU would have to be top-notch. AMD has Vega and Nvidia has Volta coming soon, both of which will be crazy powerful and efficient compared to what we have today. It would be a tough road to get to the point where they could compete. I'd be more than happy for the extra competition in the GPU market. Well, I'm guessing it would be locked down to a Mac, though. As for the x86 stuff, it's here to stay I think. I wonder how much needing the CPU to be x86 holds us back.
epobirs - Monday, April 3, 2017 - link
Microsoft is adding x86 support to Windows 10 on ARM. Apple should be able to do the same with macOS.
loekf - Tuesday, April 4, 2017 - link
AFAIK, the Darwin kernel (or OS X itself) already supports 64-bit ARM as an architecture. Guess it's just for Apple's internal usage. Still, I doubt Apple would do another architecture change for OS X, after PPC to x86 a while ago.
name99 - Tuesday, April 4, 2017 - link
Jesus, the ignorance. What kernel do you think iOS uses?
willis936 - Monday, April 3, 2017 - link
Hopefully Imagination's lawyers know their business. Apple engineers stare at IP that another company owns and sells, then say "hey, we can do that without paying for the IP". Apple will need to be extremely careful with their design if they don't want to get sued to the moon.
tipoo - Monday, April 3, 2017 - link
It would be a lot more interesting to me if they scaled this up to the Mac. Then there would be some new blood in the graphics arena, even if the third party was Apple and locked down to Apple's hardware. But I wonder how much better they could do than Nvidia on efficiency, if any.
melgross - Monday, April 3, 2017 - link
It's tough to say. These GPU manufacturers have people designing them who are committed to a certain way of looking at it. Even moving generation to generation, there's a similarity. It's possible that Apple's people have different ideas. Something that people speculating on this need to understand is that over the years, Apple bought several small GPU design houses. It's very likely that those people, and their IP, haven't been deposited in a vault somewhere. I expect that they've been working on GPU architecture for quite some time. And they've been hiring GPU engineers for several years, more recently including a bunch from Imagination a year or so ago.
mrtanner70 - Monday, April 3, 2017 - link
My guess is we will ultimately discover they have expanded their architectural license with Arm to include graphics.
tipoo - Monday, April 3, 2017 - link
Does ARM do a TBR like Imagination though? Losing the TBR would be a big efficiency blow.
loekf - Tuesday, April 4, 2017 - link
ARM"s Mali GPUs come from a company called Falanx they bought some time ago. Yes, Falanx is also using some form of tile based rendering:https://www.hardocp.com/article/2005/08/02/mali200...
Tile based rendering is not an IMG exclusive anymore. Even Nvidia in Pascal is using it these days.
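For anyone unfamiliar with the idea, the core of a tile-based pipeline is the binning pass: sort triangles into small screen-space tiles first, then shade one tile at a time out of fast on-chip memory instead of streaming the whole framebuffer through DRAM. A deliberately simplified C sketch of just the binning step (illustrative only, nothing like a real driver):

```c
#include <stdlib.h>

#define TILE    32                      /* assumed tile size in pixels */
#define TILES_X (1920 / TILE)
#define TILES_Y (1088 / TILE)

typedef struct { float x0, y0, x1, y1; } BBox;   /* triangle screen bounds */
typedef struct { int *tri; int count, cap; } Bin;

static Bin bins[TILES_Y][TILES_X];

/* Record which tiles a triangle's bounding box touches; later, each tile's
   list is rasterized and shaded while the tile stays in on-chip memory. */
static void bin_triangle(int id, BBox b) {
    int tx0 = (int)(b.x0 / TILE), tx1 = (int)(b.x1 / TILE);
    int ty0 = (int)(b.y0 / TILE), ty1 = (int)(b.y1 / TILE);
    if (tx0 < 0) tx0 = 0;
    if (ty0 < 0) ty0 = 0;
    if (tx1 >= TILES_X) tx1 = TILES_X - 1;
    if (ty1 >= TILES_Y) ty1 = TILES_Y - 1;

    for (int ty = ty0; ty <= ty1; ty++)
        for (int tx = tx0; tx <= tx1; tx++) {
            Bin *bin = &bins[ty][tx];
            if (bin->count == bin->cap) {
                bin->cap = bin->cap ? bin->cap * 2 : 16;
                bin->tri = realloc(bin->tri, bin->cap * sizeof(int));
            }
            bin->tri[bin->count++] = id;
        }
}
```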
vladx - Tuesday, April 4, 2017 - link
"Tile based rendering is not an IMG exclusive anymore. Even Nvidia in Pascal is using it these days."That's because both companies have patented tech the other needs to use in their products, thus cross-sharing patents is very common in the GPU world.
Shadowmaster625 - Monday, April 3, 2017 - link
Someday INTC will get their turn.
zodiacfml - Monday, April 3, 2017 - link
Things could get interesting once Apple starts using HBM in their designs. Right now, AMD and Intel (just today) are capable of HBM2, which will start the drama soon.
prisonerX - Tuesday, April 4, 2017 - link
How would HBM help? CPUs are generally not memory bandwidth constrained.
zodiacfml - Tuesday, April 4, 2017 - link
Plenty to mention where the only drawback for now is cost.
name99 - Tuesday, April 4, 2017 - link
Yes, but GPUs are. And this is a thread about GPUs...
pjcamp - Monday, April 3, 2017 - link
Shorter Apple: "We'll use your IP until we understand how to clone it. Then FU."name99 - Monday, April 3, 2017 - link
https://arstechnica.com/apple/2016/03/apple-acquir...
So you just KNOW it was Apple's fault?
vladx - Tuesday, April 4, 2017 - link
Since when does IMG have to bow down to Apple and accept any offer thrown their way? IMG's IP is strong, and no doubt Apple is gonna lose badly in the courts.
webdoctors - Monday, April 3, 2017 - link
This is awesome news; now Apple can do for GPUs what they did for CPUs in the mobile market. Look at ARM CPUs' IPC before Apple came along - it was crap compared to Intel. Look at Imagination GPUs compared to ATI/NVIDIA, it's complete junk. Now we'll finally get developers creating serious games for the mobile market instead of playing second fiddle to the console/desktop segments. Ultimately this will force the Android HW makers to also get serious about SoC GPUs (and catch up to the Nvidia Shield obviously).
Laststop311 - Monday, April 3, 2017 - link
Imagination GPUs are not junk compared to the big discrete GPUs. They use way less power, so of course they aren't going to be as fast, but watts per GFLOP they are doing just fine.
Laststop311 - Monday, April 10, 2017 - link
In fact, Imagination GPUs are the best-performing low-power GPUs, designed from the ground up to be extremely fast for low-power use. They aren't meant to scale up like the big boys, and they don't have to. Specialized hardware is always going to do a better job than hardware that can scale across all power levels.
tipoo - Monday, April 3, 2017 - link
How is it complete junk? They're the mobile class leader in perf/watt. They just don't compete in desktop wattages.
webdoctors - Monday, April 3, 2017 - link
But they're obviously not perf/watt leaders across the entire V/f curve, right? If I had a GPU that was the perf/watt leader at 10nW for VR, you wouldn't care, because operating at 10nW isn't very interesting. Same way IMG's design might be fine for 1W, but if you're operating at 10W for a VR device that Apple is potentially targeting, or 50W for a console or home appliance, IMG just won't cut it. People who want quality VR aren't expecting to run it off just a 1W GPU.
tipoo - Monday, April 3, 2017 - link
You said "Look at Imagination GPUs compared to ATI/NVIDIA, its complete junk". Not that they don't compete in every spectrum of the wattage curve.prisonerX - Monday, April 3, 2017 - link
And it will cure cancer and everyone will join hands and sing kumbaya...
willis936 - Monday, April 3, 2017 - link
>serious game devs on mobile
This is a serious case of "if we start spouting bullshit early enough we can claim we were right because people were saying it was true for years!"
The mobile games industry is the same as it ever was: a wart on the ass of TV ads.
Stan11003 - Monday, April 3, 2017 - link
Couldn't Apple just buy Nvidia, or for that matter Imagination? They have $215 billion in cash reserves; Imagination's market cap is less than a billion.
Kvaern1 - Monday, April 3, 2017 - link
Food for thought. Some time ago (about a year?) Nvidia announced they were willing to not only sell GPUs but also license their IP to customers. Would a switch to Nvidia IP fit the timetable?
Rezurecta - Monday, April 3, 2017 - link
Little update. Intel announced that they are licensing AMD now :D
fr33h33l - Monday, April 3, 2017 - link
Given IMG's designs, patent portfolio and today's stock price, wouldn't they be quite an attractive purchase to vendors like Samsung and Intel that need a boost in their own GPU development?
vladx - Tuesday, April 4, 2017 - link
Intel? No, they already licensed AMD's tech, and Samsung is satisfied with the graphics they get from Qualcomm and ARM. Until high-quality VR becomes a real thing on phones, there's simply no need for more GPU power than an Adreno 530.
KoolAidMan1 - Monday, April 3, 2017 - link
iDevices' year-plus performance gap over Android devices will only continue to widen. Performance is going to matter more and more as things like on-device AR become important.
techconc - Monday, April 3, 2017 - link
I don't get why people think Apple is so beholden to IMG's patents, etc. As of today, IMG's stock is down 61.6% already. Their market cap is now just $290 million. Apple could easily buy them at a premium without a hiccup. The point being, if Apple wanted to buy them, they'd already own them. Maybe they forced this option as a negotiation tactic. Either way, Apple doesn't make strategic moves like this to save money. They do it to provide their products with a competitive advantage. Whatever Apple develops is not burdened with the need to appeal to a wide range of customers. Apple can develop exactly what they want for their chips. This should indeed be interesting.
prisonerX - Monday, April 3, 2017 - link
There are laws against Apple buying them at this point.
AlexCumbers - Monday, April 3, 2017 - link
Apple should just buy them. It's a rounding-error cost to them, and they can then build on the platform just how they like. Seems like a no-brainer, especially as the stock has crashed!
madwolfa - Tuesday, April 4, 2017 - link
Indeed. Imagine the cost savings on the patent litigation alone.
prisonerX - Tuesday, April 4, 2017 - link
Then imagine the cost of the stockholder litigation and subsequent payouts.
NetMage - Tuesday, April 4, 2017 - link
If Apple cared, they could buy them at pre-announcement pricing, but if they cared, they would own them already.
Henry 3 Dogg - Monday, April 3, 2017 - link
This is not about shaving a few cents off a chip. Apple produced its own ARM cores to have THE fastest, most efficient ARM cores.
Apple want something from their GPU that Imagination simply wasn't providing them with.
And it could be as simple as control.
- Knowing that new design releases are synchronised to their own product releases, to ensure that they are first to market.
- Knowing that their requirements of the GPU aren't giving hints to the outside world about what they are doing.
- Maybe just that IMG's rate of spend on its GPU R&D, and therefore its progress, was less than Apple wanted.
But I think that it's going to be something more than that.
I suspect Apple want one, very scalable GPU design that they can use across their product range.
And I suspect that they want fast hardware ray tracing for VR applications.
And I suspect that they want power sipping mundane 2D graphics.
And I suspect there are DSP things that they want to do for their Camera[s].
And I suspect that they want to integrate these things VERY closely together.
So closely, that it might hint where they are off to.
Torrijos - Wednesday, April 5, 2017 - link
I think this might be on the nose... It isn't an accident that Apple also invited journalists to announce their commitment to the pro market...
For me the first step will be the iMac (an iMac Pro was even mentioned)...
I think Apple wanted control because they felt that by the time they adapted third parties' GPUs to their designs, they were outdated and underpowered.
We'll probably see refreshed iMacs this year with a GPU made by Apple, scaling across models and pricing.
OpenGL and OpenCL, but mainly Metal (DSP for certain tasks, management of screen refresh rates, etc.), maybe HSA (that is wishful thinking, though).
Still, they will hopefully allow better handling of external GPUs (for CUDA).
Then when the data is in, they'll be able to finalise the design of the GPUs for the 2018 Mac Pros, in case anything was wrong or unexpected. Apple in the "i" era has been very iterative with great success.
Cliff34 - Monday, April 3, 2017 - link
Maybe they want to get into the VR arena and want to develop it in-house.
rpSupaNova - Wednesday, April 5, 2017 - link
Nope, Apple has been designing its own GPU for a while; this is just the confirmation. So those looking for huge increases in speed going forward are going to be disappointed.
EnzoLT - Monday, April 3, 2017 - link
I feel like this is more of a negotiation tactic than anything.
sagor808 - Tuesday, April 4, 2017 - link
"We’re making a documentary about how wearable technology can help improve fitness, overall health and overcome obstacles. We’re looking for stories that are heartfelt, make you smile, that are surprising and unique. Simply losing weight isn’t enough for the stories we’re looking for. There needs to be a personal, mesmerizing, and unique aspect to each story that draws you in and makes you understand how technology can enhance your life."Tell your story.
prisonerX - Tuesday, April 4, 2017 - link
"I stuck my fitness band up my ass, and it still worked!"That's my story.
SaolDan - Wednesday, April 5, 2017 - link
Lol
xype - Tuesday, April 4, 2017 - link
Last year I read reports that AMD and Apple are "cooperating on GPUs", and I assumed the thinner MBP GPUs were the result of that. But maybe those were just a first step?
GraXXoR - Tuesday, April 4, 2017 - link
Now that their share price has tanked 70%, they can be bought out by Apple for a fraction of their previous price. Nicely done, Apple!
Zingam - Tuesday, April 4, 2017 - link
Let the patent infringement lawsuits begin!!!
R3MF - Tuesday, April 4, 2017 - link
Given the logic expressed in this article as to why Apple pursued autonomy in the CPU space, and how that forms a lens for us to look at their GPU ambitions, surely it makes more sense for them to tank ImgTec's share price... and then buy them. It gives them a world-class GPU tech and patent portfolio... and their very own CPU architecture (MIPS).
SydneyBlue120d - Tuesday, April 4, 2017 - link
Could it make sense for Mediatek to buy Imagination?
vladx - Tuesday, April 4, 2017 - link
It could, but I doubt it unless Imagination loses against Apple and gets valued much lower than even now.
Ananke - Tuesday, April 4, 2017 - link
Apple made two large R&D investments in China, and last week agreed to two more R&D facilities there. All those engineers and billion-dollar investments should be kept busy, and Apple will need to cut somewhere in the US. If Apple doesn't invest in China, and eventually doesn't transfer know-how and technology there, the Chinese will shut the market to them - like they did several times already, with "iStore" outages :) :)
DezertEagle - Tuesday, April 4, 2017 - link
I'd like to believe this is a smart move, considering Imagination Technologies just brokered a deal w/ Qualcomm and because Apple's HW secrets seem to be compromised by agencies affiliated w/ the CIA. Ever since 2011, the year Microsoft/Nvidia turned its graphics APIs (i.e. DirectX 11+) into undetectable ominous spyware, I've been increasingly paranoid that these outsourced GPU/SoC architectures have opened up arbitrary but functional methods for injecting unmanaged code into their host operating systems. I've read several stories of Qualcomm having an extensive dirty little past of surveillance secrets embedded in their devices.
So my major point of contention here is: If IMG Tech uses the same PowerVR architecture for both Apple and Qualcomm/Android, would this enable hacked Qualcomm devices to become a test bench for designing software & firmware to infect Apple products?
HomeworldFound - Tuesday, April 4, 2017 - link
You really don't need a hardware back door on Apple's devices now; the best attacks are performed on automatic backups made by iTunes when a device is connected to the computer, or when a device uploads data into the cloud. Apple gets to say its device security is unaffected by the FBI/CIA while they get basically any information they want with some brute force and court orders. Apple intentionally made the encryption applied to those backups weaker. Apple got its way.
Laststop311 - Monday, April 10, 2017 - link
That's why you disable iCloud backups.
Meteor2 - Wednesday, April 5, 2017 - link
'paranoid'-- you used the correct word there.
pravakta - Friday, April 7, 2017 - link
"Imagination Technologies just brokered a deal w/ Qualcomm"What is the source of this info? Where Qualcomm will use IMG GPU? What about their own Adreno?
mrpii - Tuesday, April 4, 2017 - link
I have to wonder if this isn't about being competitive with CUDA/OpenCL engines that run on the GPU. Not only can this dramatically speed up some compression and encoding algorithms running on a GPU, but right now on a Mac there is no support for things like TensorFlow running on an Imagination GPU. Apple hardware can't play along.
Torrijos - Wednesday, April 5, 2017 - link
Actually, OpenCL on the Mac Pro was one thing that was working and using both GPUs (a rough sketch of what that setup looks like is at the end of this comment). The problem then was that the tech wasn't properly used by software developers, and then the GPUs didn't evolve.
Check the Luxmark 3 Benchmarks
http://barefeats.com/hilapmore.html
Apple's Final Cut Pro is optimised for it too, etc.
Apple's true failure was believing that the entire industry was going to switch towards more abstraction of the hardware so that 2 GPUs could compete with single GPUs.
HSA is still in the future...
CrossFire and SLI look to be dying...
Apple now feels it's time to custom-design it and ensure they aren't dependent on others' choices.
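For reference, the "both GPUs in one OpenCL context" setup mentioned above looks roughly like this in C on macOS (a generic sketch of the standard OpenCL API, not Apple-specific code):

```c
#include <OpenCL/opencl.h>   /* <CL/cl.h> on non-Apple platforms */
#include <stdio.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id gpus[8];
    cl_uint ngpus = 0;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, gpus, &ngpus);

    /* One context spanning every GPU the runtime exposes, so kernels can be
       queued to both dGPUs in a dual-GPU Mac Pro-style machine. */
    cl_int err;
    cl_context ctx = clCreateContext(NULL, ngpus, gpus, NULL, NULL, &err);

    for (cl_uint i = 0; i < ngpus; i++) {
        char name[256];
        clGetDeviceInfo(gpus[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("GPU %u: %s\n", i, name);
    }

    clReleaseContext(ctx);
    return 0;
}
```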
BillBear - Wednesday, April 5, 2017 - link
You can do neural network training in TensorFlow and then export that neural net to run under Apple's Basic Neural Network Subroutines API, which was added in the last version of macOS and iOS.
https://www.bignerdranch.com/blog/use-tensorflow-a...
easp - Thursday, April 6, 2017 - link
Apple may well have some of their own graphics IP to bargain with. Technically, this isn't their first GPU. They shipped the short-lived QuickDraw 3D accelerator in 1995.
amosbatto - Wednesday, April 12, 2017 - link
In no way will this save Apple money when they are currently paying Imagination around 30 cents per device. For Apple, that is pocket change. It is probably worth $60 million per year for Apple to pay Imagination to avoid lawsuits over the IP, because they will need to defend against AMD, Nvidia, Intel and Qualcomm as well, which means they need to be able to countersue using Imagination's patents. Otherwise, Apple is a prime target for the picking.
msroadkill612 - Monday, May 1, 2017 - link
Whatever the case, what I see is the prescience of AMD in correctly seeing (IMO), roughly 15 years ago, that the future belonged to having a presence in both processors vital to IT solutions - CPU and GPU (they formally bought ATI in 2006). Now we have Apple and Intel (that recent Israeli company buy) in a mad scramble to catch up.
msroadkill612 - Monday, May 1, 2017 - link
A beancounter footnote is that the automotive IT market is forecast at $120B by 2020, dwarfing all current IT markets. This is largely about converting the vast output of camera sensors into usable AI for better, safer, and even autonomous driving. The CPU plays a subsidiary role in this.
Time to market is a critical factor. AMD is by far best placed to offer automotive engineers a unified architecture and set of developer tools.
Dealing with one set of Dilberts is always better than two.
CodeJingle - Wednesday, July 12, 2017 - link
Not a bombshell; Apple plans these things years in advance. The Imagination GPU IP is lackluster, with zero support for native FP64.
tomabc - Monday, September 4, 2017 - link
There is a big gap between PowerVR GPUs and the current market leaders (Nvidia). Apple needs a more scalable, flexible and more efficient GPU for future products. Look at Apple's CPUs: in single-core efficiency they're close to Intel; we don't know what happens when Apple builds a many-core CPU or implements multithreading.