28 Comments
darkich - Tuesday, October 28, 2014 - link
Well, this was unexpected. The Artemis and Maya CPU cores (likely successors to the A53/A57) have still not been announced, despite ARM clearly hinting at them for months now.
przemo_li - Tuesday, October 28, 2014 - link
Not really. The GPU division was split from the CPU division some time ago. Now those are two separate entities, pursuing different schedules and different strategies.
darkich - Tuesday, October 28, 2014 - link
Maybe Artemis and Maya are actually the two video processors also announced today? I hope that's not true, though.
It is time for a new CPU announcement as well. The A5x cores have reached the market, and unless ARM significantly speeds up its product cycles, it could find itself behind the curve (the Krait successor should be out next year, and Cyclone and Denver are already very competitive, with their superior single-threaded performance).
michael2k - Tuesday, October 28, 2014 - link
Wait, when did we ever expect them to be ahead of the curve? The only reason the curve looks different is that Apple's A7 came out last year, ahead of anyone's expected timetable. Had it been released this year (as the industry expected), it would have come out alongside the A53/A57 parts and the Tegra K1 Denver part, and only a couple of quarters ahead of the Krait successor.
ARM doesn't need to be ahead of the curve, per se; it only needs to be competitive, and until Denver moves down to smartphones and the Krait successor comes out, there is no part more suitable for smartphones than the A5x cores.
darkich - Wednesday, October 29, 2014 - link
Well, that's the thing: judging by ARM's previous record, it takes at least two years for a newly announced ARM CPU to reach the market. And the successor to Krait should show up in phones much sooner than that, supposedly in about a year.
michael2k - Wednesday, October 29, 2014 - link
Right, but that's only because Apple upset the cart. Had the A7 been released this year, the Krait successor wouldn't have needed to hustle and would have been out in 2017, and everything would be as expected. However, Apple released the A7 in 2013, everyone said "OH NO", Denver got out this year (hotter than expected), and the Krait successor is coming in 2015 instead.
Tigran - Tuesday, October 28, 2014 - link
And what does "scalable to up to 16 [shader] cores" mean? Isn't there a definite number (which one?) of shader cores in the Mali-T860?
spellegrini - Tuesday, October 28, 2014 - link
The design supports up to 16 cores. That said, whoever implements the design (e.g. Samsung LSI) will decide how many cores to configure the GPU with.
Tigran - Tuesday, October 28, 2014 - link
Thank you. And how can I know the number of shader cores in a particular SoC? Is there any abbreviation in its name?
Ryan Smith - Tuesday, October 28, 2014 - link
Yes. The typical nomenclature is [GPU model]MP[number of cores], e.g. Mali-T860MP8.
Tigran - Tuesday, October 28, 2014 - link
Thank you. Still, I wonder what's the difference between the Mali-T860 and the Mali-T760 (not the Mali-T628)...
Ryan Smith - Tuesday, October 28, 2014 - link
10-bit YUV support and better energy efficiency, though by how much we couldn't say.
kron123456789 - Tuesday, October 28, 2014 - link
Power efficiency, maybe? Because it's still 320 FLOPs and 16 texels per clock.
MrSpadge - Tuesday, October 28, 2014 - link
No. The licensees can choose how many shaders they want to equip the basic design with.
kron123456789 - Tuesday, October 28, 2014 - link
"the throughput of a T860 design can scale from 20 FLOPs (10 MADs) and 1 texel per clock up to 320 MADs and 16 texels per clock." - it should be 160 MADs.yowanvista - Tuesday, October 28, 2014 - link
I wonder if they'll ever release blobs capable of supporting full desktop OpenGL 4.x for that series.
eddman - Tuesday, October 28, 2014 - link
"#1 GPU IP vendor for android"Not qualcomm? Guess all those cheap SoCs from mediatek, rockchip and allwinner with mali-400/450 helped with that, unless by "IP vendor", they meant those who license their GPUs to third-parties, which only leaves Imagination Technologies (PowerVR), right?
Ryan Smith - Tuesday, October 28, 2014 - link
"unless by "IP vendor", they meant those who license their GPUs to third-parties"Correct.
Yojimbo - Tuesday, October 28, 2014 - link
Mali-T830 GPU: Maximal performance from minimal silicon area
Mali-T820 GPU: Best performance from smallest possible silicon area
Are these two the same GPU or is their marketing self-contradictory?
tipoo - Friday, October 31, 2014 - link
Well, "smallest possible" sounds smaller than "minimal", so I guess they just mean the most performance they could get out of a teensy die for that case, while not quite so small for the other.
knightspawn1138 - Tuesday, October 28, 2014 - link
I keep wondering when or if Intel will decide to license some of these GPU architectures and start integrating them into their Core or Atom lineups. I imagine that Intel doesn't "need" to pay someone else for a graphics solution, since they already build their own (and the HD series GPU cores are decent, but still way behind AMD and nVidia). It would be nice to see Intel tack on a GPU that is actively trying to compete with the other big dogs in the GPU space.
DanNeely - Tuesday, October 28, 2014 - link
They used PowerVR in a few Atoms. They never worked well; PowerVR's drivers were worse for gaming than Intel's on Windows. It was even worse on Linux, because PowerVR only dropped a single binary driver for whatever kernel was current when the chip launched and then refused to fix any of the major bugs or even just recompile for newer kernels. After that experience, I doubt Intel is going to go back to using a third-party GPU again.
knightspawn1138 - Tuesday, October 28, 2014 - link
I do have one of those cursed Atoms with the Poulsbo video chip (an Acer Aspire One netbook). You're right, that was a nightmare: horrendous performance, no driver support, and I had to work a few hours of dark magic to get Ubuntu or Mint to display to the screen correctly. Still, I would love to see Intel develop a chip with a GPU that comes close to being in the same class as the nVidia or AMD graphics subsystems.
frenchy_2001 - Tuesday, October 28, 2014 - link
I would love for Intel to bite the bullet and license Nvidia's graphics IP, either Kepler (a good start) or even Maxwell. Nvidia understood quite a while ago that most of the magic happens in the drivers and has dedicated huge resources to them. An Intel processor with integrated Nvidia video would be quite nice, particularly on Atom. An HTPC or media stick in a single SoC...
lilmoe - Tuesday, October 28, 2014 - link
Or pull an AMD and buy Nvidia altogether. I agree, though: a Broadwell/Skylake CPU with a Maxwell/Pascal integrated GPU on Intel's 14nm/10nm process would be really amazing in both performance and power efficiency for mainstream devices. Add stacked DRAM to that, with some unified-memory goodness, and it'll give mid-range dedicated GPUs a run for their money (probably even some of the high end).
ant6n - Wednesday, October 29, 2014 - link
So if a T860 has 320 FLOPs per clock, and if it runs at, say, 1 GHz, that gives 320 GFLOPS. That would be similar to a GeForce 820M (276.1-366.3 GFLOPS, according to Wikipedia). I guess it's gonna be a while before we can have Crysis-like games on a tablet/chromebook.
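A minimal sketch of that back-of-the-envelope math (assuming the article's 16 cores and 10 MADs per core per clock, 2 FLOPs per MAD, and the purely hypothetical 1 GHz clock; real implementations will clock differently):

    # Peak-throughput estimate for a fully configured 16-core Mali-T860
    cores = 16           # maximum shader core count for the T860 design
    mads_per_core = 10   # MADs per core per clock, per the article
    flops_per_mad = 2    # one multiply-add counts as two floating-point ops
    clock_ghz = 1.0      # assumed clock, not an ARM-published spec

    gflops = cores * mads_per_core * flops_per_mad * clock_ghz
    print(f"~{gflops:.0f} GFLOPS peak")  # prints "~320 GFLOPS peak"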
vFunct - Thursday, October 30, 2014 - link
If that's supposed to be their high-end part, it already seems outdated for something that should arrive in devices in about 3 years. Apple's A8X in the iPad is good for about 200-300 GFLOPS, and the Tegra K1 is at 350+ GFLOPS... and both of those are out right now.
kron123456789 - Sunday, November 2, 2014 - link
But the 820M is the same GPU as the 720M and 620M, and it's 4-year-old Fermi.