26 Comments

neoraiden - Tuesday, January 14, 2014
How does this compare to Nvidia's new K1 (roughly, based on the figures quoted)? Right now I'm getting the feeling that Nvidia's only significant advantage is its ability to port games due to its scaled-down desktop tech.

Loki726 - Tuesday, January 14, 2014
If you believe the 40% improvement for the 805, it puts the 805 and K1 roughly neck and neck. We'll have to wait for devices to start shipping to know for sure.

nathanddrews - Tuesday, January 14, 2014
Take it with a grain of salt, but check out the K1 performance on a native 4K tablet: http://www.tomshardware.com/news/lenovo-thinkvisio...

Poik - Tuesday, January 14, 2014
In theory the K1 would have that advantage, but it would likely only be a real advantage under Windows, since Android uses different APIs than Windows and most of Nvidia's desktop tech. It will be interesting to see how things settle in the coming months and who gets the design win. It sounded like, in other releases, Qualcomm wanted to hold its cards tighter than it had previously. If Samsung is going with a 64-bit Exynos for the S5, then it would make sense for the North American version to be 64-bit as well, which means the K1 could be in the running, though a mere "dual core" might be a no-go for Samsung. I can see Qualcomm really wanting to get its 810 out by then to make for an impressive launch. Otherwise an 805 might be at least a competitive replacement, but it would make for a less impressive, only-sort-of-64-bit launch.

MikhailT - Tuesday, January 14, 2014
> Apple likely moving more aggressively after the milder than expected GPU update in the A7.
How can Apple move more aggressively if they're still dependent on what ImgTec does? Considering that the Series 6XT refresh is a quicker version (rumored to be 50% faster) of the same Series 6 the iPad already has (it took them two years to add that 50% in 6XT), I don't think Apple will beat either the K1 or Qualcomm's Adreno GPUs this year.

toukale - Tuesday, January 14, 2014
Someone has not been paying attention. Apple has been hiring GPU engineers for the past two years. Talk is they will bring it in-house like they did with their CPU. If that is indeed the case, then I would look for Apple to bring the heat.

MikhailT - Tuesday, January 14, 2014
I missed the news about Apple hiring recently in Orlando for its own GPU designs; now it makes sense, and it is going to be interesting.
Apple does have a few starting designs they can work from for now: Series 6 from ImgTec, Mali from ARM, or Nvidia's GPU, since Nvidia offers a broad IP licensing program.
Once they get going, the GPU becomes more customized, like the evolution from the A5 to the A7.

melgross - Sunday, January 19, 2014
This could be even more interesting. Remember that Apple owns 11% of Imagination, and Intel owns 15%, which it bought right after Apple.
Could Apple, and possibly Intel, convince Imagination to give one, or both, an architecture license, allowing them to build their own GPUs, as Apple does now with ARM? That could certainly be a reason why Apple is hiring GPU engineers. Coming out with a GPU from scratch? I'm not so sure Apple would want to try that. It could take years before they could get that right.

name99 - Monday, January 20, 2014
"It could take years before they could get that right."(a) You are assuming that they are starting this today. Who's to say they didn't start it YEARS ago? Maybe the GPU team began its work at the same time as the Swift team, and are just taking a little longer?
(b) You have to look at the big picture. Current Apple does not do things randomly, without a plan. They probably have an SoC plan stretching out ten years. If you were planning for ten years of SoCs, what would you want? Having their own GPU allows them to get there on their own schedule.
As I've said before, I expect the A8 to be basically Apple's Sandy Bridge. They'll stick with the A7 core, there'll probably be MHz improvements from a switch to 20nm, but the big work will be adding their own custom GPU along with a Nehalem/Sandy Bridge type architecture, so a ring tying together cores, L3 slices, GPU and memory controller. They *might* add a third core depending on their expectations for how well their high end apps like Aperture and Final Cut (or heck, even iPhoto and iMovie) might use such a core.
Rather less gee-whiz, but I expect the M8 will also be a custom Apple core (effectively the companion core for the A8, but with a different philosophy, essentially SW-managed rather than OS- or HW-managed), and it will live on the same SoC.

mikehunt69 - Wednesday, January 15, 2014
But have they bought any GPU companies? On the CPU side they purchased P.A. Semi.
GPU engineers will always be needed by a systems/SoC vendor to optimize and integrate GPUs into their systems, so unless Apple has been hiring GPU engineers in large numbers and headhunting the big industry talent, you're drawing the wrong conclusions.
As with CPUs, GPUs are iterative designs that are refined over a number of years, and even the mighty Apple wouldn't be able to engineer one from scratch, in-house (without an acquisition), and better the industry leaders without infringing patents held by others.
Apple owns about 10% of Imgtec, but that wouldn't give them access to the intellectual property and patents.

theCuriousTask - Wednesday, January 15, 2014
It can make sense for Apple to design their own GPU architecture because their goals are different from ImgTec's. They can cater purely to themselves, which means they can spend more on die area and power efficiency for their design, as well as have the architecture coincide with their release cycle. Unlike many of their competitors, Apple is willing to pay more for their SoC because they don't need to make a profit on that specific part. Hence the A7.

melgross - Sunday, January 19, 2014
They did, years ago. But nothing came of it, as far as I know. Still, Apple is intimately familiar with Imagination's IP. And since they own part of the company, it would make sense for them to try to pursue an architectural license, as I mentioned in my previous post. It's very possible they may have obtained that already. Or, if not, to add features to Imagination's own GPU that would be sold to Apple.
I believe that Apple is the largest buyer of a single SoC design each year, the same one for the iPhone and iPad models. No one else sells nearly as many units of an individual model as Apple does. That gives them an advantage in discussions over a design.

mikehunt69 - Wednesday, February 5, 2014
Apple has plenty of experience integrating Imgtec GPUs into their SoCs, but this doesn't mean that they have the vast amount of engineering experience required to optimize or even improve a rendering pipeline. Look at how long it has taken Intel to get a handle on integrated graphics, and they are CPU gods!
No doubt Apple influences Imgtec's decisions when it comes to picking market segments for parts, but the cumulative years of engineering time and experience can't just be picked up by a handful of Apple engineers, regardless of whether Imgtec or anyone else would be willing to sell an architectural license.

extide - Tuesday, January 14, 2014
There are also higher-end SKUs available from ImgTec, even ones in the original Series 6 lineup. I believe they are only using a 4-"core" version of the GPU, and there are SKUs with up to 6 "cores".

ruzveh - Wednesday, January 15, 2014
Looking at the video up-conversion test above, I can see that the 1080p is far superior to the converted 4K video. You can clearly see a notable brightness loss in the conversion to 4K. Most of the content watched on tablets is not 1080p, since tablets don't have 1TB or 10TB of space available, so it's obvious we would mostly be watching 480p, 576p, or 720p content (HD, SD, and below) on them. It's important to see how they perform with those content types. I generally don't prefer conversions; simply play content at its native resolution.

colinw - Wednesday, January 15, 2014
...what picture are you looking at? Both sides are being upscaled. The HQV looks much better.
You would prefer to see a 480p video in a tiny little thumbnail rather than upscale it?

nathanddrews - Wednesday, January 15, 2014
While HQV makes some decent scalers, I'm not convinced on this one. Contrast boosting and edge enhancement are not "improvements".

twizzlebizzle22 - Wednesday, January 15, 2014
Okay, I have a Nexus 5 with a 1920 x 1080 screen, or about 2 MP. If I have a 2560 x 1440 screen (what this year's phones are heading for), that's about 3.7 MP, roughly an 80% increase in pixels. Surely that negates the gaming performance gain when it's only 40% faster than the 800.
I don't know about you, but I'd rather stick with 1080p and have an Adreno 420.
Or am I missing a trick here?
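
For reference, the pixel arithmetic above can be checked quickly; here is a minimal sketch in Python using the two panel resolutions quoted in the comment (it works out to roughly 78% more pixels to fill):

```python
# Pixel counts for the two panel resolutions mentioned above.
fhd = 1920 * 1080   # Nexus 5-class 1080p panel
qhd = 2560 * 1440   # the 1440p panels expected on 2014 flagships

print(f"1080p: {fhd / 1e6:.2f} MP")                  # ~2.07 MP
print(f"1440p: {qhd / 1e6:.2f} MP")                  # ~3.69 MP
print(f"Extra pixels to fill: {qhd / fhd - 1:.0%}")  # ~78%
```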

SR81 - Thursday, January 16, 2014
I'm curious why they don't do like the consoles: render at a lower res and upscale to the phone's native res.

jimjamjamie - Sunday, January 19, 2014
I know that I have seen some mobile games do this (or at least have the option to) already. The problem is that it looks pretty rubbish.
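
For anyone curious, the approach SR81 describes (draw to a smaller internal buffer, then scale it up to the panel) can be sketched roughly as follows in Python with pygame. The resolutions and the scene are made up for illustration; a real game engine would render to a low-resolution GPU framebuffer rather than a CPU-side surface.

```python
# Render at a low internal resolution, then upscale to the native panel size.
import pygame

NATIVE = (2560, 1440)   # assumed native panel resolution
RENDER = (1280, 720)    # internal render resolution (a quarter of the pixels)

pygame.init()
screen = pygame.display.set_mode(NATIVE)
backbuffer = pygame.Surface(RENDER)   # all drawing happens at the low res
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # Draw the "scene" at the low internal resolution.
    backbuffer.fill((30, 30, 30))
    pygame.draw.circle(backbuffer, (200, 80, 80), (640, 360), 100)

    # One upscale per frame to the native resolution, then present.
    scaled = pygame.transform.smoothscale(backbuffer, NATIVE)
    screen.blit(scaled, (0, 0))
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```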

Enkur - Wednesday, January 15, 2014
Anyone know what game that is in the 5th picture?

SR81 - Thursday, January 16, 2014
Injustice: Gods Among Us

ArthurG - Wednesday, January 15, 2014
I know that Anand doesn't want to hurt Qualcomm, but man, Unreal Engine 3... so 2010... when the main competitor, Nvidia, shows Unreal Engine 4 on the K1! It took time, four generations, but this round nobody can argue about how far behind the Adreno 420 is and how advanced the K1 is...

melgross - Sunday, January 19, 2014
Don't be so hasty. Until we see the K1 in a real, shipping model, we should be skeptical of anything Nvidia says or shows. We've been burned in the past by Nvidia, pretty much all of the time.

adboelens - Monday, January 20, 2014
Isn't it strange that they promote the 805 as being able to scale to 4K very well because of the HQV technology, and then show a chip for TVs based on older technology that doesn't have the HQV scaling?

C.C. - Thursday, January 23, 2014
I was thinking the same thing, but then again it mentions the 802 as being a "variant" of the 800 SoC. I know it still uses the Adreno 330 GPU, but who knows what (if any) extra magic Qualcomm put into it? I guess until they are officially reviewed we won't know for sure.