34 Comments
Auzo - Monday, January 7, 2013 - link
Does the i500 modem work with CDMA2000?
jeffkibuule - Monday, January 7, 2013 - link
The number of hoops nVidia would need to jump through to support the few CDMA2000 carriers like Verizon, Sprint, and KDDI just isn't worth the cost. Plus, they'd have to pay licensing fees up the wazoo to Qualcomm for the CDMA2000 patents, and everyone is already making a clear move to LTE.
My guess is no.
DanNeely - Monday, January 7, 2013 - link
It's a software defined radio, so they'd only need to pay CDMA royalties on variants targeted at CDMA carriers. It's probably a year or two too early to make LTE-only products for any of them. VZW's planning to have its national LTE rollout completed by the end of the year; but Sprint's not only farther behind, outside of the cities their 3G coverage is often roaming on VZW's network, and the latter isn't sharing LTE with anyone else yet. It's possible that Softbank's cash infusion will be spent significantly expanding Sprint's native network footprint; but if so I'd have expected them to have started talking it up already.
DanNeely - Monday, January 7, 2013 - link
On further thought, with MVNOs, CDMA is likely to linger as a requirement for longer. VZW isn't letting its MVNOs have access to LTE at all yet, and AFAIK isn't letting Sprint MVNOs roam on anything except 2G for voice/SMS usage; the latter will probably end up requiring CDMA support of some sort until VZW starts retiring its legacy networks entirely to free up additional spectrum for LTE.
blanarahul - Thursday, January 17, 2013 - link
Looks like NVIDIA is using the "ARM Cortex-A15 Quad-core 28nm HPM Hard Macro" implementation of Cortex A15.
arooj799 - Tuesday, February 19, 2013 - link
I just found this blog and have high hopes for it to continue. Keep up the great work; it's hard to find good ones. I have added it to my favorites. Thank you.
pugster - Monday, January 7, 2013 - link
It is interesting that the 5th core is an A15 though.
Auzo - Monday, January 7, 2013 - link
I think you almost get the best of both worlds (synchronous vs asynchronous) with this. If I ever have a single-threaded, compute-heavy task, I can run it on the companion core without having to ramp up all 4 of the other cores, while at the same time not needing all the additional hardware for 4 independent power sources that asynchronous operation would require.
Auzo - Monday, January 7, 2013 - link
Hmm... I'm not sure if this was the case with the Tegra 3, but it looks like for the T4 you can enable and disable each of the 4 cores independently, so a single-threaded, compute-heavy task could run fine on one of the main cores with the remaining 3 disabled. So I'm not sure if what I said above really comes into play then.
Krysto - Monday, January 7, 2013 - link
I wonder why Nvidia didn't just go with big.LITTLE. The choice is a little strange compared to big.LITTLE's Cortex A7 core(s). I mean, even if it's low-clocked (I assume, like the Tegra 3 one was), does it consume less power than 2-4 Cortex A7s running at 1-1.2 GHz? Or maybe they optimized slightly more for performance instead of energy efficiency at the "low end"?
DanNeely - Monday, January 7, 2013 - link
I'm guessing T4 may have been too far into development to change when ARM announced big.LITTLE; and with nVidia's consistently slipping mobile timeline, adding additional changes/delays would have been a bad thing. I suppose we'll find out what nVidia thinks about the concept more definitively when T5 is announced next year; but a 1:1 mapping between low- and high-performance cores seems like a waste of die space to me on a quad-core design, since you only really need 4 cores for heavy loads to begin with.
tipoo - Monday, January 7, 2013 - link
But then they could do that with any of the other four cores. What's the point of a fifth low-power core if it's still an A15? I guess it probably runs on lower-power-optimized silicon, but the power difference probably won't be as big as with the older big.LITTLE-style implementation with a much smaller fifth core.
Exophase - Tuesday, January 8, 2013 - link
Almost every multicore ARM SoC out there, including quad-cores like Tegra 3 and Exynos 4412, already lets you individually power gate each core (see the sysfs sketch below). The only SoC I can think of that didn't have this feature was Tegra 2.
If this works anything like it did on Tegra 3, you won't be able to run the 4 full-frequency cores at the same time as the companion core, so there'd be nothing asynchronous about it.
The whole thing sounds weird to me, especially without any manufacturing differentiation for the fifth core this time (I hear that was expensive). Maybe a power-optimized A15 is a lot different from a frequency-optimized one at the same frequencies, I don't really know... previous cases from ARM don't seem to be that incredibly dramatic.
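For reference, the per-core power gating mentioned above is normally exposed to software through the generic Linux CPU-hotplug interface in sysfs. The sketch below is a minimal, illustrative example only - it assumes a rooted Linux/Android device with the standard /sys/devices/system/cpu/cpuN/online nodes, and it is not how Tegra's own companion-core switching is triggered (that decision is made inside the kernel's drivers).

/*
 * Minimal sketch: take CPU cores offline via the Linux hotplug nodes.
 * Needs root; cpu0 typically cannot be taken offline. Cores taken
 * offline this way can then be power gated by the SoC.
 */
#include <stdio.h>

static int set_cpu_online(int cpu, int online)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/sys/devices/system/cpu/cpu%d/online", cpu);
    f = fopen(path, "w");
    if (!f)
        return -1;               /* node missing or no permission */
    fprintf(f, "%d\n", online);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Leave only cpu0 running for a light, single-threaded load. */
    for (int cpu = 1; cpu <= 3; cpu++)
        if (set_cpu_online(cpu, 0) != 0)
            fprintf(stderr, "could not offline cpu%d\n", cpu);
    return 0;
}

Writing 1 back to the same node brings a core online again; on Tegra the kernel makes equivalent decisions automatically when switching to and from the companion core.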
mayankleoboy1 - Monday, January 7, 2013 - link
What's the power usage and thermals? I strongly suspect that T4 in a smartphone will run at lower frequencies to avoid major throttling.
Ryan Smith - Monday, January 7, 2013 - link
All we know for sure right now is that Shield packs 38Wh of battery cells, which is on par with 10" tablets such as the iPad 4 and Nexus 10.
Krysto - Monday, January 7, 2013 - link
Do we know anything about its API support? OpenGL ES 3.0? OpenCL? CUDA? Anything?
I found it strange that he talked about "computational photography" but didn't even once mention OpenCL or CUDA. Nvidia is being a little odd in not revealing much technical stuff about Tegra 4.
Ryan Smith - Monday, January 7, 2013 - link
The GPU is non-unified. This almost certainly excludes OpenCL; OpenGL ES 3.0 we still need confirmation on. However, we've yet to see a non-unified OpenGL ES 3.0 (or OpenGL 3.0) GPU, as unification and that level of technology have previously gone hand-in-hand.
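As a purely illustrative aside (not anything nVidia has documented for Tegra 4): whether OpenCL is usable on a device ultimately comes down to whether a conformant driver ships, and the standard way an application finds out at runtime is to ask the OpenCL loader for platforms, as in the minimal probe below.

/*
 * Minimal sketch: probe for an OpenCL runtime at startup. Assumes the
 * standard OpenCL headers and ICD loader are present; on a device whose
 * GPU vendor ships no OpenCL driver, no platforms are reported.
 */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_uint num_platforms = 0;
    cl_int err = clGetPlatformIDs(0, NULL, &num_platforms);

    if (err != CL_SUCCESS || num_platforms == 0) {
        printf("No OpenCL platforms available - fall back to a CPU path.\n");
        return 0;
    }
    printf("Found %u OpenCL platform(s).\n", num_platforms);
    return 0;
}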
Auzo - Monday, January 7, 2013 - link
The non-unified GPU is the single disappointment in an otherwise awesome SoC. Too bad.
Krysto - Monday, January 7, 2013 - link
Yikes. Seriously? Still non-unified?
I was pretty sure they were going to unify it now because they needed to do that! Everyone else is doing it, so it seems pretty insane not to do it.
But at the same time, I knew from the beginning that if they changed the architecture now, they would still need to change it again in 2014 - and that seems rather soon, to do it again after a year.
Why change again in 2014? Because I think Tegra 5 might be part of Project Denver, with a custom CPU design and a new GPU architecture, to make it all streamlined from smartphones to servers and supercomputers.
But it seems changing it twice in 2 years was going to be too costly for them, so they decided to just skip the architectural change now and do the 2014 one with the 64-bit CPUs. But what does this mean now - that Tegra 4 won't have OpenGL ES 3.0? Or that it won't be as efficient? This isn't very good news.
I also still don't think I like the idea of a single A15 core working in low-performance mode vs 4 Cortex A7 cores. That basically just sounds like some sort of Intel Turbo Boost. But to me, using simple cores as the base for low performance sounds like a better idea than using a single complex core for it. Hopefully we'll see some data when Anand reviews both Tegra 4 and the upcoming Exynos 5 Quad (which I assume is the one with 2 clusters of 4 A15s and 4 A7s).
tipoo - Monday, January 7, 2013 - link
Huh, I thought T4 was supposed to bring in Nvidia's newer unified architectures from the desktop/laptop side. T5 or T4 Plus then? I'm a little disappointed. SGX has had them for a while if I'm not mistaken. Can't wait for benchmarks though.
Rockmandash12 - Monday, January 7, 2013 - link
They managed to make it larger than Tegra 3? I doubt it will be on phones any time soon.
Dribble - Monday, January 7, 2013 - link
So the iPhone 5 with its 97mm² A6 can't exist then?
powerarmour - Monday, January 7, 2013 - link
Touché
Death666Angel - Monday, January 7, 2013 - link
Just as an FYI: the A6 is on a 32nm process while T4 will be on 28nm.
sosadsohappy - Monday, January 7, 2013 - link
Just as an FYI: the A6 is a dual core. Tegra 4 is a quad core.
lmcd - Wednesday, March 20, 2013 - link
Just as an FYI: the Exynos 4 Quad was *INTAKE* a quad-core with great power and thermals at 32nm.
tipoo - Monday, January 7, 2013 - link
Don't quote me on this, but I thought I read it had 45% *lower* average power consumption than Tegra 3, which would be good for phones. I said the same thing when Tegra 3 launched - I thought it would be too power hungry for phones, but it managed.
CeriseCogburn - Sunday, January 27, 2013 - link
It's the haterz theme and meme when they attack nVidia - they haven't gotten over the idea that one size fits their target 100% of the time.
TareX - Monday, January 7, 2013 - link
After dealing with horrid battery life on my Tegra 3 One X, Tegra 4 can be slower for all I care, as long as it's more power efficient.
Cotita - Monday, January 7, 2013 - link
"As far as Shield goes, I wanted to correct one thing about how the PC display streaming works. The PC will stream to the display directly, not through Shield. Shield will pass controller commands to the PC. "This makes no sense. How will the PC stream video to the display?
During the demo you can see that the HDMI cable is connected to the Shield.
xcomvic - Monday, January 7, 2013 - link
$100.00 to compete with Ouya and Game stick
caught22 - Friday, January 11, 2013 - link
Anand, Brian, you say "It's 28nm HPL [...] This helps explain the 1.9GHz max frequency for the A15s in Tegra 4." This doesn't make much sense to me since HPL is slower than HPM. Why should HPL help to explain the 1.9GHz... are you sure that it's really HPL?
blanarahul - Thursday, January 17, 2013 - link
Actually, Cortex A15 is capable of clock speeds greater than 2 GHz (on HPM).
mika2000 - Thursday, February 21, 2013 - link
I like the idea of the Shield gaming device. It looks really awesome.