True enough and agreed. Not really that impressive.
More so considering nvidia's long history of misrepresenting power consumption and performance on upcoming parts. No doubt by the time we see this available and shipping there will be much better SOC options from the likes of Qualcomm etc.
Well said. In the mobile space nVidia has consistently fallen far short of their claims. Their CEO just seems like such a blowhard with his long list of outrageous claims.
Yes, we are almost certainly going to see another Tegra 4 play out here. Far over-promise and massively under-deliver.
Tegra 4 has been an enormous bomb with sparse design wins, to the point that they threw most of what they had into a handheld Android gaming device that needed a $100 price cut before it even hit the market, and it -STILL- is not selling. The lack of interest in Shield just extends the failure of Tegra 4.
Tegra 4 has been crushed in the market by Samsung and Qualcomm, and by the time we even see this part actually -available *AND* shipping- it will already be old news, just like Tegra 4 was. Never mind that nvidia is getting their butt kicked in SOC design wins by the Qualcomm monolith, likely due to the failures and issues with Tegra 3 in devices such as the Nexus, which ditched nvidia for Qualcomm.
Tegra 4 is in nothing but a crappy over-heating tablet and a failed handheld gaming device sold by nvidia themselves. I expect the same for this part if they again can't manage to deliver on their promises -AS WELL- as deliver on time. Two things they have yet to manage...
Throughout its history, Nvidia has generally operated like a locomotive. Whenever they have entered a new sector they make a splash by making promises, and in the first couple generations they come up short of their promises. But with successive generations they've gained ground, and executed more consistently than their competitors, until they are among the leaders of the sector. It happened with desktop 3d graphics and with mainboard chipsets. There is stiff competition in the mobile SOC space, but I wouldn't be so quick to dismiss them there, either.
Yet by your admission there is every reason to dismiss them until they catch up. The problem is that they are competing against four companies that have consistently executed for years now: PowerVR, Qualcomm, ARM, and Apple. They have to consistently outperform Apple and PowerVR if they wish to gain Apple's business, they have to consistently outperform ARM and PowerVR to gain Qualcomm's business, and they have done neither yet.
My post was a reply to kukarachee's post. He is dismissing the announced generation out-of-hand because of their past relative performance in the sector. The important distinction here is that kukarachee is attempting to make a prediction about a future iteration based solely on Nvidia's previous results in this sector. My argument is that there is NOT every reason to dismiss Nvidia, because they have a track record of succeeding at playing that same game of catch up. I will admit that it's two conflicting track records, and one must decide which one applies in this instance, but I will certainly not admit that there is every reason to dismiss them until they catch up. Whichever track record you find more appropriate to pay attention to is your own choice, however.
There's no indication they've caught up yet, however. The PowerVR 6 was demonstrated at GDC, earlier this April, and is expected to be out shortly. Logan won't be out until next year, which means by definition it is going to be behind.
I have to disagree. I am not sure about your analysis that anything that comes out later is "by definition behind." Rogue is not promising 400 GFLOPS, is it? And it's expected to be out in Q4 2013 isn't it? What we know for sure is that Logan leapfrogs Rogue in terms of API compliance. Suppose Logan devices come out one year later than Rogue devices, and Logan performs upwards of 50% better than Rogue. Would that not be considered being "caught up?" And I never even said that with Logan, Nvidia would be "caught up." I said that Nvidia has a history of catching up. So my claim is that the advantage PowerVR holds over Nvidia immediately after Rogue and Logan, taking into account adjustments made for when the product is first made available to the market, will be less than the advantage that PowerVR has held over Nvidia for the duration of the previous generation, with similar adjustments made. I further claim that there is a good chance that eventually (0, 1, 2 generations?) that gap will be negative. My argument here has nothing to do with Logan being better than Rogue. It is a refutation of kukurachee's dismissal of Nvidia's claims for Logan's performance that he based on the GPUs that Nvidia had on their previous generations SOCs, and the amount of hype/interest they tried to generate for these SOCs.
Yes, actually, Rogue is promising 400 GFLOPS, it promises OpenGL ES 3*/2/1.1, OpenGL 3.x/4.x, and full WHQL-compliant DirectX9.3/10, with certain family members extending their capabilities to DirectX11.1 functionality.
For Logan to be upwards of 50% better than Rogue it would have to be a 600 GFLOP chip since PowerVR 6 is expected to hit 400 GFLOP. What would not be caught up is if Logan was a 400 GFLOP chip released next year. You see, Rogue is intended to hit 1 TF, possibly in full powered tablet form factors, so for Logan to truly best Rogue it would need to hit 1.5TF in a tablet form factor.
PowerVR Series6 GPUs can deliver 20x or more of the performance of current generation GPU cores targeting comparable markets. => iPad currently is about 70GF, so a Series 6 implementation would be 1.4 TF
PowerVR Series6 GPU cores are designed to offer computing performance exceeding 100GFLOPS (gigaFLOPS) and reaching the TFLOPS (teraFLOPS) range => 100 GF is the bottom of the expected range
My point holds though that the Series 6 is designed to go over 1TF in performance, which is more than enough to match NVIDIA's Logan for the foreseeable future.
You do realize IMG.L exists in phones because they couldn't cut it vs. AMD/NV in discrete GPUs, right? They have been relegated to cheapo devices by AMD/NV and are broke compared to NV. Apple apparently doesn't give them much profit. They had to borrow money just to buy MIPS CPUs for ~100mil (they borrowed 20mil), while NV buys companies like Icera for 330mil cash and on July 29th bought the Portland Group (NV's compiler teams just got better), again with cash I'm assuming, as they have 3.75B in the bank: http://blogs.nvidia.com/blog/2013/07/29/portland/ I'm guessing this hurts OpenCL some also, while boosting NV in HPC even more. PGI owns 10% of the compiler market, while Intel owns ~25%. So NV just gained a huge leg up with this deal.
Back to socs, none of the competing socs have had to be GOOD at GAMES until now and not even really yet as we're just getting unreal 3 games coming and announced. How do you think this plays out vs. a team with 20yrs of gaming development work on drivers & with the devs? Every game dev knows AMD/NV hardware inside out for games. That can't be said of any SOC maker. All of those devs have created games on hardware that is about to come to socs next year (no new work needed, they already know kepler). We are leaving the era of minecraft & other crap and entering unreal 3/unreal 4 on socs. Good luck to the competition. If AMD can survive long enough to get their soc out they may be a huge player eventually also, but currently I'd bet on NV doing some major damage with T5/T6 etc (if not T4/T4i). T4 is just to hold people off until the real war starts next year as games are just gearing up on android, and T4i will finally get them into phones in greater numbers (adding more devices in the wild based on a very good gpu set).
That being said, T4 is already roughly equal to all others. Only S800 looks to beat its gpu (I don't count apple, sales dropping and not housing anything but apple hardware anyway). I don't see rogue making any speeches about perf, just features and I don't see them running unreal 4 demos :) I don't hear anything from Mali either. S800 appears to have a good gpu (330) but it remains to be seen if games will fully optimize for it. I see no tegrazone type optimizations so far on Adreno. There is no Adrenozone (whatever, you get the point). All soc makers are about to be seriously upset by gaming becoming #1 on them. Before they needed a good 2D gui etc, not much more to run stupid stuff like minecraft or tetris type junk. We'll see how they all fare in stuff like Hawken etc types. I'd bet on NV/AMD with NV obviously being in the driver seat here, with cash no debt etc helping them and already on T5 by the time AMD ships A1 or whatever they call it.
SOC vendors had better get their gaming chops up to snuff in a hurry or the NV GPU train will run them over in the next 3yrs. NV GPUs will be optimized for again and again on desktop, and that tech will creep into SOCs a year or two later, over and over. All games made for PCs will start to creep to Android on the very same hardware they were already made for a few years before on desktops. Devs will start to make advanced games on HTML5 (HTML6 etc eventually), OpenGL, WebGL, OpenCL, Java etc to ease portability, making DirectX less needed. At that point Intel/MS are both in trouble (we're already heading there). If you aim a game at DirectX you have a much harder time porting it everywhere else. That same game made on OpenGL ports easily (same with HTML5 etc) to all devices.
In a few short years we'll be looking at a 500W normal PC-type box with Denver or whatever in it (Cortex-A57 I'd guess) with an NV discrete card for graphics. With another 3yrs of games under Android's belt it should make for a pretty good home PC with no WINTEL at all in it. Google just needs to get a home office package out that does most of what Office does by that time, and then there is no need for Windows/Office, right (something aimed more at home users)? They need to court Adobe now to port its suite to Android before Denver or Boulder launches (and before anyone else aiming at desktops), or come up with their own content creation suite I guess. A lot of content comes from Adobe's suite. You need this on Android, along with an office package, to convert many home users and work PCs.

I see horrible stock prices for Intel/MS in less than 5yrs if Google courts Adobe and packages a nice office product for home users. They can give away the OS/Office until MS/Intel bleed to death. All they need is games, Adobe, and NV/AMD GPUs (choose either or both) in a tower with any SOC vendor. They will make money on ads, not on the hardware or software that MS/Intel need to make profits on. Margins will tank for those two (witness MS's drop on RT already, and Intel's profits tanking), and Android will come to the front as a decent alternative to WINTEL.

It's kind of funny that Google/(insert SOC vendor name here) are about to turn MS into Netscape. MS gave IE away until Netscape bled to death; the same is about to happen to them...ROFL. At the very least WINTEL won't be the same in under 5yrs. While Intel now has Android, it would be unwise of Google to help Intel push them around (like Samsung does to some degree). Much better to make a SOC vendor the next Intel, but much weaker. Google can push a SOC vendor around, not a Samsung or an Intel (not as easily anyway).
I'll remind you that PowerVR has already been killed once by NV (and ATI at the time). That's why they are in phones, not PCs :) Don't forget that. I don't see this playing out differently this time either. Phones/tablets etc are accidentally moving into NV/AMD territory (gaming) now, and the huge volumes make it a no-brainer for these two to compete here. It's like the entire market is coming to AMD/NV's doorstep like never before. Like I said, good luck to Qcom/Imagination/ARM/Samsung trying to get good at gaming overnight. There are no Qcom "Gaming Evolved" or "The way it's meant to be played" games yet. NV/AMD have had 20yrs of doing it, and devs have the same amount of experience with their hardware (which, with die shrinks, will just slide into phones/tablets shortly with fewer cores, but it's the same tech!). If Qcom/Samsung (or even Apple) don't buy AMD soon they are stupid. It is the best defensive move they can make vs the NV gaming juggernaut coming, and if that happens I'll likely sell my NV stock shortly after...LOL. Apple should do this to keep Android-on-NV from becoming the dominant gaming platform. MS could buy them also, but I don't think Intel can get away with it quite yet (though they should be able to, since ARM is about to become their top CPU competitor). When an ARM SOC (or whatever) gets into a 500W PC-like box, Intel can make that move to buy AMD for GPUs. Until then I don't think they can without the FTC saying "umm, what?". I'm also not sure Samsung can legally buy AMD (security reasons for a non-American company holding that tech?).
Anyway, it's not just about perf, it's also about optimizations (such as NV buying PGI for even MORE cuda prowess for dev tools). IE - Cuda just got a boost. More prep work for Denver/Boulder is afoot :) Just one more piece of the puzzle so to speak (the software stack). For google, they don't care who wins the soc race, as long as it pushes Android/Chrome as the Wintel replacement in the future for the dominant gaming platform and eventually as a PC replacement also for at least some work (assuming they court Adobe and maybe a few others for content creation on android/chrome or whatever). I’m guessing Denver/Boulder (and their competition) are pipelined to run at 3.5-4ghz in 70-100w envelopes. It would make no sense to compete with Intel’s Haswell/Broadwell etc in 8w. A desktop chip needs to operate at ~70w etc just like AMD/Intel to make the race even. You have a PSU, heatsink/fans and a HUGE box in a PC, it would be stupid not to crank power up to use them all. A Cortex-A57 chip at 4ghz should be very interesting with say, a Volta discrete card in the box :) A free OS/Office suite on it, with say $100 A57 4ghz chip should be easy to sell. A soc chip for notebooks, and a stripped gpu-less version for desktops with discrete cards sounds great. I like the future.
$10.0 even!! A very good explanation of your theory. It's not in the mainstream yet, and Intel (and Anandtech) keep pushing how great their latest and greatest chip is that you can get for only $500. This monopoly is going to crash and burn. MS, as you said, tried to go the ARM way (the system just won't support monopoly profits for two monopolies; maybe one, but not two) but didn't execute well.
Denver won't necessarily be ported to the PC form factor, but who knows. It will surely change the landscape in the smartphone/tablet segments. I also think IMG, Mali and Adreno can learn to compete in games. Yes, they would not have had 20 years, but they do have a lot of momentum and the sheer number of devices running these platforms will push developers to code for them.
Who says they are measuring it against the iPad 4 (which is 80 gigaflops, btw)?
Their wording is so vague that they can choose a midrange model and say that it's more representative of the GPU performance of most GPUs (at the time that statement was made), which would be correct.
"Do Intel and PowerVR have similar tricks up their sleeve?". Not Intel but PVR likely with Rouge. The key competitor here is Qualcomm with their Adreno 330 powerhouse. That has time to evolve to better efficiency and tuning. But competition on the mobile side is heating up with this announcement and potentially Nvidia's shilf to quickly make this part available. Bigger tablets and UltraLight notebooks can benefit from it rather than settle with Intel's junk iGP. The disappointment of the market is no real Tegra4 and Tegra4i products in the market after such delays while Qualcomm churns out model after model of products for OEMs.
huh? I really don't know what you people are talking about. Tegra 4's closest competition is the Snapdragon 800, which is almost nonexistent in the market. Tegra 4 is beating the SD800 in almost every way. Including design wins!!!
"While some will argue that Nvidia's design wins are on the weak side, they have more announced design wins than their competitor, Qualcomm with the Snapdragon 800" http://www.brightsideofnews.com/news/2013/7/23/dou...
so while you guys might always have negative views towards nvidia, it's not nearly as bad as you guys try to make it out to be
If we won't see a powervr to "necessarily" beat this why discuss it? They don't exist until...umm...They EXIST. :) If I can't BUY it, it doesn't exist :)
I have a dream in my head of a 50000-GPU-core SOC that is less than .1W with 10000 TFLOPS of GPU power. Do I now have the fastest core on the planet? No, it doesn't EXIST until you can BUY it. Your first sentence really made me laugh. The fact is they don't exist. I'll give you that PowerVR chips that beat this don't exist YET, but that's still just a MAYBE they will...ONE DAY. They'll face bankruptcy if nobody goes for MIPS tech. They clearly don't make enough from GPUs on every Apple device to survive the next few gens. TI exited due to not buying Icera (or any modem), and Imagination may exit due to MIPS meaning nothing (could be wrong, we'll see - deving for MIPS is a tough sell I'd think vs ARM/x86). I suspect they'll be bought or die in under 5yrs. They have a market cap of 1B if memory serves and make about 30mil/yr (only in 2012, less previously; we'll see for 2013). Good luck keeping up with everyone else making 12x or far MORE than that in the same business. You're dead or bought. Apple could buy them for the equivalent of a song (not sure why they haven't), but maybe they're making their own GPU to dump them eventually. They bought PA Semi for the in-house CPU. Do they have a GPU coming? Otherwise why not purchase IMG.L? They are already in everything you make. Odd, whatever.
The process advantage is disappearing quickly for Intel. Shortly Intel will have to out-CHIP the competition, not out-fab them. It's a new ballgame from here on out for Intel, and the competition's profits blow Intel's away (e.g. Samsung making almost Intel's yearly profit in a quarter). You can't outspend the competition on fabs when they make 3.5x what you do per year and can spend it all on fab R&D until you go broke. Intel raised 6B in bonds to fund a buyback. It's a sign of weakness IMHO. They will have to cut dividends soon also (last I checked they can't fund them for more than a few years, and even less with dropping profits). Again a sign of weakness.
Let me know when powervr (in something other than Apple products) beats a T4, let alone T5. I can't see a product yet :) Anything COULD beat NV. But until something DOES in a PRODUCT who cares? I'll need to see shipping product doing it first. Just like people ranting on lack of T4 wins (well duh, it just came out), I don't see S800 announcements right and left either. Why? It's NOT out either (not shipping in enough volume for massive announcements anyway, not yet). T4 just started shipping in July to ODM's.
I can predict everything will beat a T4 in 5 yrs...LOL. But none of it is shipping NOW. I can buy a toshiba tablet with T4 now (and HP also), though I wouldn't want the toshiba until quality is better. They appear to have design problems with their tablet. But it's shipping now. Qcom will beat this, but WHEN? And how do you know? S800 won't do it vs. T5 (possibly T4 but we don't know in what or if at all yet). Have you seen benchmarks for the next rev past S800? You are special. You work for Qcom or something and have a T5 in your hands? I'll reserve anything regarding AMD for when they actually SHIP a soc.
Also note Intel has hit 14nm snags recently and is already delaying chips. I agree with you that PowerVR needs something for reference out now though ;) Where is this stuff? Are they that far away from shipping something? No xmas for them then, nor for S800 if it doesn't get out the door in volume soon. We see T4 announcements from Toshiba, HP (both have T4s for sale), Asus, etc, so they will be in xmas stuff for sure. How many S800 devices will make it for xmas? Are they shipping to ODMs yet? I know the Xperia Z is supposedly coming in Sept so it has to be shipping, but if anyone has some links to verify that... Then again, when they ship devices with 3 different chips they can claim shipping but not really ship your desired chip for months.

Exynos 5420 is ramping this month (Aug), but I guess we'll have to wait for the Note 3 to see T628 MP6 perf. They are claiming it's 2x faster than the 5410's PowerVR 544MP3, but I'll wait for benchmarks of a T628 before I believe this, and for the battery life to go with that benchmark. How long before it is in a product? The Note 3 is expected in Sept, but it has a long list of supposed chips, so I'm not sure which has the S800 or 5420 or when these come specifically (or if I even have the correct chips; I saw a claim of S600 for one model - who knows at this point). I wish companies would just ship ONE SOC per product, or name them something different like Note 3.1 for S800, Note 3.2 for 5420, etc. Something to differentiate better for customers who don't read every review on the planet. Would the REAL Note 3 please stand up...LOL. Granted, it's really about the included modem for specific regions, but it's still confusing for a lot of people.
Intel is moving to their own SOC GPUs. I'll believe they beat people when I see it in anything GPU :) NV had samples of 20nm SOCs ages ago, so Intel will need 14nm to do any damage here, and Broadwell is slipping to 2015. When will we see a 14nm SOC if they are having problems? All SOCs will be 20nm next year, so Intel's 22nm will be facing these (T5 etc) not long after Silvermont/Valleyview/Baytrail release (tablets by xmas, phones next year?). Everyone has sped up their 14nm plans, so they won't be too far behind a DELAYED Intel 14nm process, especially Samsung, which is swimming in billions to dump into its fab tech. T6 is supposed to be 16nm FinFET in early 2015, so not much of a process advantage for Intel at 14nm then, and T6 has Maxwell in it. Intel had better have a great GPU, and this won't be a basic ARM port on the CPU side either; it is in-house DENVER :)
http://www.digitimes.com/news/a20130418VL200.html So 20nm risk production started 2013 Q1, and 16nm risk starts by end of year. "Chang indicated that TSMC already moved its 20nm process to risk production in the first quarter of 2013. As for 16nm FinFET, the node will be ready for risk production by the year-end, Chang said."
So depending on who T6 comes from, it's either 16nm (TSMC, it's at least this) or 14nm (Sammy?). Either way Intel faces stiff competition from here on out. The fab party is over. Time to make some unbeatable chips or pay the price. You won't out-fab the competition any more. Intel is about to release 22nm tablets/phones and everyone else is about to do 20nm shortly after that. Not sure how that's a real victory. I don't see one at 14nm either. We may actually see Intel get beaten to 10nm if Samsung keeps pounding out 7.9B+ in profits per quarter (Q1 was 7.9B, 8.3B I think for Q2). This kind of profit can kill Intel's fab lead quickly. Unless they seriously boost profits, Samsung has 10nm in the bag, and that's assuming Intel wins the race to 14nm, which has yet to be proven and is showing problems, or Broadwell wouldn't be delayed, right? Will Apple be fabbing for people shortly? They certainly have the cash to build 3-4 fabs today easily and laugh in 3yrs. I'm surprised it hasn't already happened (though they did buy one, why not start some new ones while you're rich?). Well, rumor is they bought UMC, but it's probably true. I'd be doing that and building 3 450mm fabs for the future (what's that, 20bil for 3 of them? Even at 25-30bil who cares, Apple has ~130). Buy IMG.L and start pumping out GPUs in one fab, SOCs in another and memory in a 3rd for all your devices in 3yrs. Samsung makes as much as they do on a device because 60+% of a phone/tablet comes from IN HOUSE.
As far as I can see this xmas for tablets is owned by S800/T4. Xmas Phones are owned by S600 I think, with maybe enough volume for S800 to make it into some things. T4i is Q1 and will miss xmas, and I don't think 5420 will make it into anything volume wise for xmas as it's just ramping this month.
T5 will be out before July 2014 on 20nm (so will a 20nm S800 etc). Intel won't be seeing 14nm until 2015 in volume if they crack 2014 at all. I don't call that SOON after T5. Unless you want to say VERY soon after Intel puts out a 14nm soc it will face T6 etc.
Here it is, Christmas time, and your predictions are a little off. There are many S800 SoCs in phones, many S600 SoCs in phones, nearly all the good tablets have S800 SoCs, and no Tegra4 to be seen.
If you can't even predict the next 4 months (July-Dec), why should we listen to your predictions for the next 3 years? :)
Tegra4 is a flop. Tegra4i is an admission by nVidia that Cortex-A15 can't cut it in phones. And Tegra5 won't be much better (if we even see it in anything by this time next year).
I don't know why people are picking on your comment - faster than A6? I'd hope so, given the A6 is from 2012 and this is arriving in 2014. The question is whether it'll be faster than the A8, not something from 18 months ago.
Wow. Even if we account for a new iPad delivering 2x the performance of the last one (unlikely?), Kepler is still ahead, potentially in both power and performance. This is crazy!
Except it will be first among equals. PowerVR 6 is designed to scale up to 1TF, and realistically ship in 200GF to 600GF configurations, making the Logan nothing special.
And you think Kepler isn't? Kepler is scalable as far as Nvidia's highest end chips. Plus that argument is a pretty weak one, since what matters is how much can you scale under a certain power envelope - not how much in TOTAL. That's completely irrelevant. It just means Series 6 will - eventually (years from now) get to 1 TF. But so will other chips.
Also, Kepler IS something special. Here's why: full OpenGL 4.4 support. Imagination, Qualcomm and ARM have barely gotten around to implementing OpenGL ES 3.0, and even that basically took them until now to write the proper drivers and obtain the certification.
The point is that, at the same performance level, Kepler-optimized games should look a lot better than games on any other chip.
Oh, and even Intel only barely got around to implementing OpenGL 4.0 in Haswell - the PC chip. So don't expect the others to support the full and latest OpenGL 4.4 anytime soon.
In case you don't know this already, Sandy/Ivy/Haswell integrated graphics are absolutely terrible; in most cases, using it alongside your dedicated GPU lowers performance instead of increasing it. On its own it might be fine for a few things here and there, but even that is terrible.
Your GPU knowledge is laughable. Integrated GPUs are disabled when running a dedicated one. You obviously are a troll or are plain ignorant about all CPU/GPU issues. Others, please ignore this user's posts...
Actually, you're wrong. With Nvidia Optimus/AMD Enduro the discrete GPU draws into the integrated graphics' frame buffer even when in discrete mode. Also, on desktops it is possible to enable your integrated graphics and discrete GPU at the same time to support more monitors.
The performance difference is only present with Enduro; Optimus is almost identical. The performance hit with multiple monitors on multiple devices is likely a Windows thing and a framebuffer-sync thing, not an actual problem with Intel graphics.
So we know Exar is wrong, but his point that Ang's information is irrelevant is in fact correct, for this mobile situation anyway.
If you take a look at the Microsoft Surface Pro's benchmark numbers, you'd be shocked by how many times faster its GPU is compared to the latest iPad and Android phones. Because I was!
I'm just dismissing the guy who's saying that integrated graphics from intel is absolutely terrible. It's certainly not terrible compared to the GPU in top mobile SoC.
Good sir, Android (90% of what this will be running, I would assume) only just implemented OpenGL ES 3.0 support with 4.3, which has yet to appear on a device (the Nexus 7 R2 is first).
Look, being the first to support new APIs is great, but being the only one gains you nothing, because nobody is going to program for the 1%; that's just bad business.
Even with Kepler's advanced feature support, we'll have to see how much effort developers put into optimizing games for Kepler. Most mobile games are developed first for iOS and then ported over to other platforms.
EA just announced that the iOS App Store is EA's biggest retail distribution channel, bigger than other mobile app stores, their own Origin and other PC and console channels. So games being designed for and optimized for iOS/PowerVR GPUs are a given, because that's where the market is. nVidia will have to actively convince mobile developers to support their additional features.
What's more, I don't believe any mobile OS officially supports any version of desktop OpenGL, much less OpenGL 4.4. Android 4.3 just announced incorporation of OpenGL ES 3.0. Rather than OpenGL 4.4, the most common implementations will likely have Kepler's additional features exposed as a bunch of OpenGL ES 3.0 extensions, which may also limit adoption.
I'd just point out that hard-core gamers are really a niche market. Worldwide sales of the Xbox 360 and the PS3 over 8 years are about 70 million each. Not exactly huge.
I am in agreement. Features are just as much a part of the conversation as power and performance. The software side should also be good as Nvidia has done a good job with this. Software/drivers being my biggest gripe for Intel.
This is definitely drumming up business for their IP licensing. Squarely aimed at Apple. Intel should take note also because Atom's graphics have always sucked.
So this is big and good news for customers. Thinking smartphone only is too small.
My question is whether this does well power-wise for video. We will have to wait and see. I'm glad to see Nvidia hanging around when the prospects for a GPU-only company lasting this long looked pretty dim. They have been good at redefining themselves, which is very tough for companies (cough, cough, Kodak and HP with Palm).
Their support may be great, but nobody is going to program for Kepler features if they don't hold a solid market share, and I mean VERY solid, mirroring Apple iPhones. I use them as an example because while Android holds most of the market, it is a mixed bag of fragmentation.
Also, the new Atoms coming out have almost nothing in common with the old Atoms. People keep making references to the old ones, which I agree were worthless even for a netbook, but the new ones I actually have some high hopes for.
Correct, I don't think Kepler is all that special. Nvidia has lost a lot of credibility in the mobile space by being unable to take the crown for three years straight. In any case, if you're going to be quoting Kepler PR, this is PowerVR's list of accomplishments for the 6 series GPU:
Delivering the best performance in both GFLOPS/mm2 and GFLOPS/mW, PowerVR Series6 GPUs can deliver 20x or more of the performance of current generation GPU cores targeting comparable markets. This is enabled by an architecture that is around 5x more efficient than previous generations. => At the same power level expect 5x the perf, approximately 350GF
All members of the Series6 family support all features of the latest graphics APIs including OpenGL ES 'Halti'*, OpenGL 3.x/4.x, OpenCL 1.x and DirectX10 with certain family members extending their capabilities to full WHQL-compliant DirectX11.1 functionality. => Just like Logan
PowerVR Series6 GPU cores are available for licensing now. => Posted last January! Expect then to see this in SoC this year, not years from now.
Whaddya mean? Kepler doesn't scale to those either. It only clockspeed scales and is probably close to headroom, while adding another core will make the die WAY too big.
No, you're wrong. Also, ES 3.0 is pretty damn close to 4.2 compliance. So don't act like it's a huge jump.
Finally, don't forget AMD's GCN is competing in this space and has proven itself very relevant.
Nvidia hasn't even implemented OpenGL ES 3.0 on anything that ships or is near to shipping! And now they are going completely to the other end of the spectrum and doing a fully DX11-compliant chip!? Oh, and the reason there is no OpenGL 4.4 on other mobile GPUs is because there is no point in doing it. Why burn area for an API you cannot use?
The expectation is that this year is Apple's "S" refresh, which last time, with the iPad 2/iPhone 4S, brought Apple's claims of 9x (iPad 2)/7x (iPhone 4S) GPU performance increases over the previous generation, which I believe translated into ~5x real-world performance improvements. As such, a 5x theoretical GPU performance increase by nVidia in 2014 is not out of line with what Apple could be delivering later in 2013. As michael2k points out, PowerVR 6 Rogue is certainly scalable to those performance levels. We'll have to see them implemented in actual devices, within realistic power constraints, to really compare their effectiveness of course.
I also think the end result will be like what we have with AMD. AMD offers very good hardware, but Nvidia pushes more at the API/driver level. So if PowerVR and Kepler become comparable, I wonder what kind of competition we will see on the driver/API/game engine front.
Apple actually writes their own drivers for their PowerVR GPUs, which seem to be very efficient considering the iPhone 5's SGX543MP3 actually achieves double the triangle throughput of the Galaxy S4's SGX544MP3 despite the Galaxy S4 having a higher clock speed and more memory bandwidth. So there is the comparison between the PowerVR reference drivers (which is probably what most device manufacturers are using) vs Kepler, and Apple's PowerVR drivers vs Kepler. It's a safe bet Apple will continue to make gaming a focus on iOS given that's a major part of the App Store, so they'll continue putting effort into GPU driver performance optimization. nVidia of course has a long track record with driver optimization, so we'll definitely see lots of competition in this area.
In terms of features, Apple continues to expose new features in PowerVR GPUs through OpenGL ES extensions. They've already implemented, or are going to implement in iOS 7, a number of OpenGL ES 3.0 features on existing Series 5/5XT GPUs, including sync objects, instanced rendering, additional texture formats, etc. The major untapped feature is multiple render target support, which should be coming now that the EXT extension has been finalized. Series6/Rogue is DX10 compliant, so I expect we'll see those additional features, like geometry shaders, exposed through OpenGL ES 3.0 extensions. So it'll be a comparison between OpenGL 3.3 features vs OpenGL 4.4 features, each exposed through OpenGL ES 3.0 extensions.
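For anyone wondering how an app actually picks those vendor-exposed features up at runtime, here's a minimal sketch of the usual pattern in C (the GL_EXT_draw_buffers string for multiple render targets is just one illustrative real extension, not something any vendor above has committed to):

/* Minimal sketch: detecting an OpenGL ES extension at runtime.
   Assumes a current ES 2.0/3.0 context; the extension name shown
   (EXT_draw_buffers, for multiple render targets) is an example. */
#include <string.h>
#include <stdio.h>
#include <GLES2/gl2.h>

static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, name) != NULL;
}

void choose_render_path(void)
{
    if (has_extension("GL_EXT_draw_buffers")) {
        /* Multiple render targets available: use the fancier path. */
        printf("Using MRT path\n");
    } else {
        /* Fall back to one render target per pass. */
        printf("Using single-target fallback\n");
    }
}

That's also why extension-only features can limit adoption: every engine has to carry both branches.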
One advantage Apple does have is that they can regularly release performance optimizations and new features in regular iOS updates which see rapid adoption throughout the userbase so developers can count on the features and use them. nVidia likely has more trouble pushing out new drivers broadly on Android. As such, it's good that nVidia is aiming high to begin with in terms of performance and features since they have less opportunity to gradually increase them over time through driver updates.
You also have to remember that most of Apple's customers (like most customers of all phones and tablets) don't give a damn about the most demanding games. Apple is happy to help out game developers, happy even to boast about them occasionally, but games are not central to what Apple cares about in a GPU.
What Apple DOES care about is enhancing the entire UI. This means that (especially as they get more control over their entire SOC) they're going to be doing more and more things that will be invisible if you just look at specs and traditional benchmarks. For example, it means that they will pick and choose whatever features are valuable in OpenGL 4.4 and integrate those into their devices (maybe, maybe not, exposing the API).
In this context, for example, the shared buffers of OpenGL 4.4 would be extremely valuable to the Layer Manager, and they have enough control over the CPU (and I assume could negotiate enough control over the GPU) to implement what's necessary to get this working on their systems. It could be there, invisible to external programmers, but making Layer Manager operations (i.e. the entire damn UI) run 20% faster and with 20% less energy.
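To make "shared buffers" concrete, here's a minimal sketch of what that pattern looks like on the desktop side via OpenGL 4.4's buffer storage (purely illustrative of the technique; not anything Apple has said they do):

/* Minimal sketch: an OpenGL 4.4 persistently mapped ("shared") buffer.
   The CPU writes compositing/vertex data through the mapped pointer every
   frame while the GPU reads the same storage, avoiding per-frame
   glBufferSubData copies. Assumes a GL 4.4 context and a GLEW-style loader. */
#include <GL/glew.h>

#define BUF_SIZE (4 * 1024 * 1024)

GLuint buf;
void *cpu_ptr;

void create_shared_buffer(void)
{
    const GLbitfield flags = GL_MAP_WRITE_BIT |
                             GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT;

    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);

    /* Immutable storage that stays mapped for the buffer's lifetime. */
    glBufferStorage(GL_ARRAY_BUFFER, BUF_SIZE, NULL, flags);
    cpu_ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, BUF_SIZE, flags);

    /* From here on, a compositor could memcpy layer data into cpu_ptr each
       frame (with its own fencing) and the GPU sees it without extra copies. */
}

The win for a compositor is exactly the kind of thing that never shows up on a spec sheet: fewer copies and less driver overhead per frame.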
A second example. Apple, on both OSX and iOS, has constantly tried to push image processing ever closer to what is "theoretically" correct rather than what is computationally easy. They would like all image processing to be done in something like Lab space, and only converted to gamma-corrected RGB at the very last stage of display. They do this on OSX; on iOS, with less CPU power, they do as much as they can in sRGB space. But if you had HW that ran the transformations from RGB (with various profiles) or YUV (again with various profiles) to Lab and back, you could expand this for usage everywhere. It's the kind of small thing (like kerning or ligatures) that many people won't notice, but it will make all images (and especially manipulated/processed images) on iOS look just that much better --- but to really pull it off requires dedicated HW.
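For a sense of the per-pixel math that hardware would be running, here's a minimal sketch of the standard sRGB-to-CIE-Lab conversion (D65 white point; textbook formulas, purely illustrative and not any particular Apple pipeline):

/* Minimal sketch: one pixel of the standard sRGB -> CIE Lab conversion
   (D65 white point). Purely illustrative of the per-pixel work involved. */
#include <math.h>

static double srgb_to_linear(double c)   /* undo the sRGB gamma curve */
{
    return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}

static double lab_f(double t)            /* CIE Lab companding function */
{
    return (t > 216.0 / 24389.0) ? cbrt(t)
                                 : (24389.0 / 27.0 * t + 16.0) / 116.0;
}

void srgb_to_lab(double r, double g, double b,
                 double *L, double *a, double *b_out)
{
    double R = srgb_to_linear(r), G = srgb_to_linear(g), B = srgb_to_linear(b);

    /* Linear RGB -> XYZ (sRGB matrix), normalized by the D65 white point. */
    double x = (0.4124 * R + 0.3576 * G + 0.1805 * B) / 0.95047;
    double y = (0.2126 * R + 0.7152 * G + 0.0722 * B) / 1.00000;
    double z = (0.0193 * R + 0.1192 * G + 0.9505 * B) / 1.08883;

    double fx = lab_f(x), fy = lab_f(y), fz = lab_f(z);
    *L = 116.0 * fy - 16.0;
    *a = 500.0 * (fx - fy);
    *b_out = 200.0 * (fy - fz);
}

Doing that (and the inverse) for every pixel of every processed image is exactly the kind of workload you'd rather hand to fixed-function or GPU hardware than to the CPU.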
Point is: I am sure Apple will continue to throw more transistors at the GPU part of their SOC. But they won't necessarily be throwing those transistors at the same things nVidia is throwing them at. They may even seem to lag nVidia in benchmarks, but it would be foolish to map that onto assuming the devices feel slower than nVidia devices.
I think your opinion about Apple's position on gaming is 4 years old. 75% of App Store revenue comes from games. The iPhone 3GS, iPhone 4S, iPhone 5, and the iPads all featured GPUs, if not necessarily CPUs, that were well ahead of the competition when they came out. So Apple doesn't just CARE about games, they effing love games/money! It didn't start out this way of course, but they learned that's what their customers wanted through app downloads and sales, so that's what they are focusing on now.
Well, I can't blame them. Most computer games are written and optimized in DirectX. With the Mac market still a small fraction of the PC market, it's hard to see developers change their DirectX stance. Thus it's hard to see Apple change their stance on computer games as well. It's more likely for them to make a console based on iOS than to push for Mac games.
The app store doesn't make a ton of money, neither does google's currently. Revenue and profit are two different things also. They are both really just designed to push the platforms and devices. Maybe they will be huge money makers in the end as games get more evolved and rise in price, but I doubt Apple makes 1B on the store vs something like 39B from the hardware. In 2011 Piper Jaffray said they made ~$239 mil but apple divulges nothing so no real proof of what they pay in costs to run the store or what they make after that. Apple has only claimed to run at break even (oppenheimer 2010). If numbers are correct, they make ~$3bil in revenue on downloads but how much is left for profits after server costs, maintaining them, bandwidth etc.
Apple cares about games when they start OPTIMIZING for their products. Currently they are inspiring nobody to do this. I see nobody but Nvidia doing anything to make things better looking on their hardware. The fact that apple's store has 75% of revenue from games just says game devs own most of the revenue from the store, and that consoles will suffer because many game elsewhere (which shows in wiiu/vita sales). It says nothing about APPLE themselves promoting games development.
Links to proof of Apple investing in games, please. So far all I see is "you should be thankful we let you put your game on iOS" rather than "Here, please make it BETTER on iOS and the A6 and we'll help you". With NV, they send help to the game dev to optimize for their hardware (literally send people to help out). Check out the Ouya comments etc. If Apple is doing this I'm not aware of it, and I have seen no articles from any dev saying "Apple was great to work with, gave tons of help to optimize for the A6" etc. AMD/NV have courted game devs forever (ok, 20yrs). I have seen nothing from Apple, and devs didn't even jump on Mac conversions until Macs hit 10% share (which is dropping now, so I expect it to drop again for devs). Apple appears to be learning nothing. If they had made games a HUGE priority 3yrs ago, Android would have never taken off. They played the enterprise card properly, but so far they have wasted the game card. Enterprise brought down RIM (everyone gaining Exchange), but only Google seems to be helping to court devs (with NV helping too). Shield wasn't on display in the main view for all at Google I/O for nothing.

Google seems to understand gaming is needed to kill Windows, take over some PC sales and kill DirectX. All of these are done with games on Android. Couple that strategy with a free OS and a free office package at some point for home users (in a decent package) and WINTEL isn't needed. Qcom/NV/Samsung will provide the SOC power and Google provides the platform to run on them. All of us win in games if we get off DirectX. Devs make more money because of easy porting, thus having more money for risk on better games since they have so many devices to target. Any decent game should make money just due to the sheer # of devices to sell to at that point.
This is ALL relative, does anyone realize that this will not be released till 2014? ......2014? the A8/A8X (or equivalent) will be released by then, as well as Qualcomm's latest snapdragon processors. Don't place your expectations too high.
I mean that performance currently hasn't scaled with the power; I'm assuming that the current 720M has a power envelope of around 35W, and yet here is a ~5W Kepler GPU that has around the same performance. If they could scale it up while keeping the efficiency, it would be a sight to behold.
For the same architecture, it is true that performance scales linearly with clock speed. However it is not true that power consumption scales linearly with clock speed. 5 W at 500 MHz does not equate to 25 W at 2.5 GHz.
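A rough back-of-the-envelope sketch of why, using the textbook dynamic-power relation P ≈ C·V²·f (the capacitance and voltage figures below are made up purely for illustration):

/* Rough sketch of dynamic power scaling: P ~ C * V^2 * f.
   Raising the clock usually also requires raising the voltage, so power
   grows much faster than linearly with frequency. Numbers are invented. */
#include <stdio.h>

int main(void)
{
    double c = 1.0;                 /* arbitrary switched-capacitance units */
    double f_lo = 0.5, v_lo = 0.9;  /* 500 MHz at 0.9 V (hypothetical)      */
    double f_hi = 2.5, v_hi = 1.3;  /* 2.5 GHz needing 1.3 V (hypothetical) */

    double p_lo = c * v_lo * v_lo * f_lo;
    double p_hi = c * v_hi * v_hi * f_hi;

    /* 5x the clock but roughly 10x the dynamic power in this example,
       before even counting the higher leakage at the higher voltage. */
    printf("relative power: %.2f -> %.2f (%.1fx)\n", p_lo, p_hi, p_hi / p_lo);
    return 0;
}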
This mostly looks very rosy for Nvidia's future IP in mobile space. If Logan comes at 28nm though, it *does* face the same issue as Tegra 3 had — unless it comes to market not in 2014 but in 2013.
As a rough estimate, at 28nm and from the power shown, to achieve max performance they need 25w on the gpu power meter. That's a bit of a shame, because that is clearly outside the bounds of minimal cooling.
On the plus side, that is a tonnage of power, which companies can tune to use as much or little of as they want, all the way up into the absurd levels — rather like the PowerVR Rogue. In that sense it looks good for NVidia.
What? If they can get 76 GFLOPS in less than 1W, they do not need 25W for 5x the performance...
And the remark about 28nm: every company will be supply constrained at 20nm in 2014. And Tegra 3 was a huge success for nVidia (50% revenue increase in FY2013) because they could sell it for a lower price (cheaper wafers) and in huge numbers (more wafers, better yields). And do not forget: 20nm will only bring ~30% lower power consumption. So if you can achieve this with an optimized architecture, there is no benefit to using 20nm from the start for Tegra.
Yes indeed. This looks to be another SOC that will tank just like Tegra 4 has totally crashed and burned (literally ;) ) with little success in the market. Tegra 4 was a flop, with power consumption and heat output so poor it needs a fan to keep it cool in some cases.
Nvidia should stop burning money with the continued Tegra failures.
This also ignores the 1000lb gorilla that is Intel wading into the SOC space, who no one, not even the big players like Qualcomm or Samsung, much less a small-fry like nvidia, will have much success competing against once they get rolling with their SOCs built using their exclusive and superior foundry technology.
Apple would be considered a competitor and probably won't be accepted, I believe, unless Intel is giving access to old fabs. Read the official information before you post.
Intel will probably never make ARM SoCs since they left the business.
Next year 3 Ghz 20nm ARM chips are rumored to arrive. Intel can't compete with that with Atom. Plus Atom's GPU is still no match for the average ARM GPU, let alone for Kepler.
I definitely agree Logan needs to be 20nm next year, and I'm not sure Nvidia will do that. I do know they intend to have Tegra 6 after it, at 16nm FinFET.
Quite true, of course. Unless they're planning to foist CUDA upon the mobile world it's not much of a draw; especially not in what will be a rather weak implementation of their compute architecture.
You're probably right about DX running on WP8 but I presume they can run OGL also, and developers would use OGL as it would make some code cross-platform. I'd bet that DX is a rather tiny market relative to OGL on mobile.
I can't possibly see WP8.x running DX, unless you are referring to the surface pro.
I mean the option is there, but lone API's don't get developer attention, unless my limited knowledge is misleading me, that would make porting things to WP from iOS/Android more work than it could possibly be worth.
Much the same as the Cell processor in PS3, never got used fully because it was a bear to code for.
They compare it to the iPad, but this could very well be in the next iPad down the road. Not as Logan, but as licensed IP. Apple could have licensed this a year ago and the world wouldn't have been any wiser, NDAs and all, despite the recent announcement. They build their own chips and license PowerVR IP. There isn't much stopping them from soliciting other vendors other than the inertia behind an established partner/codebase and existing contracts.
It would make sense. Apple is more invested (and ahead of the game) in their CPU tech. They may license a graphics chip if it is good enough. For that matter they may be able to license Intel graphics tech. People criticize it, but it may actually perform well for the tablet space where there is no dedicated memory.
No, it doesn't make sense. They currently license PowerVR, which is ahead of NVIDIA. The PowerVR 6, which is the same performance level as Logan/Kepler, was available for license last January, and should be shipping in product this year.
And you thought Atom stands a chance against ARM chips Anand? Even Bay Trail (tablet-only chip) has like HALF the GPU performance of Tegra 4...And Merrifield will be completely embarrassed by 20nm ARM chips in 2014 (which presumably includes Tegra 5, too).
Give it a tick and a tock and you will be surprised.
Haswell is Intel's actual attempt at creating a mobile product that meets the expectations of having the Intel moniker with it.
It is doing much better than Ivy did, and the graphics options are better, but the whole thing is still relatively young and juvenile. The next round I think we will see some very impressive results. Like I keep telling people, the Atom of tomorrow isn't going to be the Atom of yesterday's netbooks.
Serious question, why do mobile GPUs matter? I'm something of a declining gamer who probably last played a serious game around when ME3 came out, and I guess SC2:HotS briefly - and nothing on mobile platforms has excited me. On the other hand, I've accumulated a fat stack of games to play on consoles - the above, and Heavy Rain, Uncharted 3, The Last of Us - but I wouldn't actually play those on, say, a tablet (Heavy Rain maybe?), and even less so a phone.
Infinity Blade was impressive for its time, but I would hardly buy a device to play it, and even in my reduced-passion state I still care more about games than most people.
I think it will become more of a need as phones become the one device that does everything ie. when docked it becomes your desktop and then undocked its a smartphone. Check out the Ubuntu edge to see what I mean.
As things stand, I wouldn't even do that with an Ultrabook-class laptop, never mind a typical (non-Win8 convertible) tablet - and phones are still on a whole other plane entirely...!
Particularly if high-DPI catches on (and I hope it does), my understanding is chips of this size won't have anywhere near the bandwidth to support that use case.
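As a rough feel for the numbers (a back-of-the-envelope sketch; the resolution, overdraw and bus figures are assumptions, not measurements): just compositing a high-DPI screen already eats a noticeable share of a typical mobile memory bus, before a game touches a single texture.

/* Back-of-the-envelope: bandwidth just to composite a high-DPI screen.
   Resolution, overdraw and bus figures are assumptions for illustration. */
#include <stdio.h>

int main(void)
{
    double w = 2048, h = 1536;   /* "retina"-class tablet resolution      */
    double bytes_px = 4;         /* RGBA8                                  */
    double fps = 60;
    double touches = 3;          /* composite read + write, plus scan-out  */

    double gbs = w * h * bytes_px * fps * touches / 1e9;
    printf("~%.1f GB/s out of a ~13-17 GB/s mobile bus, before the game itself\n", gbs);
    return 0;
}

Add a 3D game's overdraw, texturing and geometry on top of that and the bandwidth budget gets tight very quickly.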
I had never thought of that but Heavy Rain on a tablet would actually be kind of awesome! Too bad that studio is Sony owned (ie only PS games) and the director is a pretentious douche. Nonetheless, they make interesting 'games' and I look forward to playing Beyond Two Souls.
There is always that slim chance it will pop up in the PlayStation store on some "Approved" HTC devices. I know my HTC One X+ got access to it because of the Tegra 3 in it, but the selection is a joke if you ask me.
You're right --- the population that care about games is tiny, meaningless to Apple, Samsung, Nokia et al.
The GPU is relevant on iOS, however, because the entire UI is built around "layers" which are essentially the backing store for every view (think controls like buttons, status bars, text windows, etc). These layers are composited together by the GPU to generate the final image you see. For this to give fluid scrolling, that compositing engine needs to be fast (and remember it is driving retina displays, so lots of pixels). Even today (and a lot more so in iOS) each of these views can be subject to frame by frame transformations (scaling, translation, becoming darker, lighter or more or less transparent) to provide the animations that one takes for granted in iOS, and once again we want those to run glitch free.
All this stuff basically pushes the backend (texture) part of the GPU, not geometry and lighting. However something which DOES push geometry (I don't know about lighting) is Apple's flyover view in Maps. [Yeah, yeah, if you feel the need to make some adolescent comment about Apple Maps, please, for the love of god, go act like a child somewhere else.] The flyovers in Maps as of today (for the cities that have them) are, truth be told, PRETTY FREAKING INCREDIBLE. They combine the best features of Google Earth and StreetView, in a UI which is easier to use than either of those predecessors, and which runs a lot smoother than those predecessors. But I am guessing that the Maps 3D views pushes the GPU HW to its limits. They are smooth, yes, but they seem to do a careful job of limiting quality to keep smoothness going. There is no anti-aliasing in play, for example, and the tessellation of irregular objects (most obviously trees) is clearly too coarse. All of which means that if a GPU 2x as fast were available, Apple could make the 3D view in Maps look just that much better.
Finally I suspect (without proof) that Apple also does a large amount of its rendering (ie the stroking and filling of paths, the construction of glyphs, and so on) on the GPU. They've wanted to do it that way for years on OSX, but were always hindered by bwd compatibility concerns. With the chance to start over on iOS, I'd expect they made sure this was a feasible path.
It's nice to see Nvidia make comparisons to their own products. In this case, outperforming an 8800GTX puts things into good perspective when looking at anand's mobile GPU benchmarks.
If Nvidia can deliver Logan "on time" then it truly will be a very, very great SoC. The biggest issue they'll still have to deal with is A15's power hungry design. Wayne's (Tegra 6) custom cores will hopefully be more power conscious like the Krait cores are.
Oh, wow, I am sure this time around their outlandish performance claims will actually come true and Apple, Samsung, Qualcomm, et al will be totally outclassed.
Especially since we all know companies like Apple (whose A6X this "sometime next year" part is compared against) just decided to stop developing their mobile CPUs and will ship the next 4 iterations of each product with an A6X variant.
Except PowerVR Series 6 is essentially at parity, and was available for licensing last January and should be shipping this year. Logan isn't supposed to be out until next year.
The only time you can trust anything nVidia says about Tegra and product delivery is when it's been shipping for weeks and owned by consumers. Otherwise, they'll lie to your face right up until the very day they are supposed to be shipping product to consumers and then shrug and say, "Oh, sorry. Yah, not happening. Don't know when exactly it'll ship, but hey, it was totally unexpected. We totally didn't know we were going to miss the date until... just now."
This is why they lost so many contracts to Qualcomm, including the Nexus one. They're just way too unreliable.
Having such great API support and having it be highly compatible with PC gaming and console gaming will only be great when it happens to a company that actually delivers product on time and within promised spec.
And that company will almost certainly be Qualcomm.
Not denying what you said, but today I realized they probably dropped Tegra because Qualcomm supports OpenGL ES 3.0, and it was one of the main features of Android 4.3. Tegra 4 doesn't support it.
Thing is, it's coming to market too late if you ask me, and to be perfectly honest I wouldn't expect it to do much more than perform equally with its competition.
The only thing I can see saving this and making it a huge success would be if the 20nm yields are crap and they can't make enough for demand.
I guess Logan is the reason why Nvidia was so confident in its IP licensing overtures. Hopefully we'll see it licensed out to some of the bigger players out there. It'd be a shame to see Kepler mobile tech wasted if it's necessarily tethered to Tegra.
I believe this is nVidia's marketing push to Apple. Everything nVidia's doing lately (such as opening up mobile GPU licensing) points to nVidia wanting to get into Apple products. If they are successful, we can expect Apple products to feature nVidia GPUs in their late 2014 or 2015 lineups. The PowerVR chips Apple uses now are lagging behind the competition in cutting-edge features like the latest OpenGL implementations, and Apple is no doubt looking into updating their GPU line. Nvidia's GPU makes a lot of sense for Apple (experience with TSMC SoCs, lack of Android phones using nVidia's GPU this year).
I'm sure Apple would evaluate it. But I think they'll just wait for Maxwell, a year after that.
I was already thinking Apple might quit Imagination in the next few years, because Imagination will be making their own MIPS chips, and try to get more into the Android world, and I don't think Apple will like that very much.
Plus, there's the technical reasons. I don't think Imagination will match Kepler/Maxwell anytime soon, probably not even in performance, let alone in features. It's really REALLY hard to support all the latest and advanced OpenGL features - see Intel, who's had tons of trouble making proper drivers over the years, and they're still barely at OpenGL 4.0 with Haswell.
I guess it will all depend on how good the PowerVR Series 6 GPU turns out to be. And who knows, they might even use an Intel SoC in 2015, as the new 4.5W Haswell has been very impressive! If it has performance near the i5 in the Surface Pro, it's a no-brainer for Apple to seriously consider Haswell or its successors. One thing is for sure: it's going to be harder than ever for Apple to figure out which road to take in the next 2 years.
You missed the part where PowerVR 6 is competitive with Logan, but was available for licensing last January. Apple might very well investigate this GPU for their 2014 SoC, but then said SoC would be half as powerful as a PowerVR 6.
I watched some of their demos; it seems like PowerVR 6 is designed mostly for OpenGL ES 3.0. I highly doubt that it can compete with Logan on advanced OpenGL 4.x features like tessellation. In fact, I'm willing to bet that Logan will have tessellation performance that's more than 3 times faster than PowerVR 6's.
Until HW is available we can't know, but the problem isn't that it can't compete, it's that Logan will come out a year later. So even if Logan is 3 times faster, it will be a year later, and a revised PowerVR 6+ will be out, instead.
We still haven't heard if this has an integrated-enough baseband for all of the lovely USA carriers. I assume it does based on T4i but I don't trust Nvidia on that point.
Nvidia uses a (quite revolutionary) soft-modem. All bands can be implemented in the software of the modem.
The problem is they've been slow to bring it to market so far (same for Samsung and all the others besides Qualcomm, really). But I doubt this will still be an issue next year.
How does this compare to the newest high-end PowerVR graphics? As Nvidia will be licensing this design, have they just dealt a death blow to Imagination Technologies' PowerVR, ARM's Mali, and Qualcomm's Adreno lines? It looks pretty good to me, but does it blow the latest and greatest from these other companies out of the water, or is it all marketing hype? Can anyone with the knowledge fill me in?
Kepler might be unmatched, but there's still one way for Nvidia to fail - and that's going with 28nm. They'd be very very stupid to do that, yet they probably will anyway. It's like they haven't learned their lesson with Tegra 3 vs S4 last year.
But Tegra 6 with Nvidia's own Denver CPU core and Maxwell GPU - that will also arrive at 16nm FinFET just a year after Logan, should really be KILLER. But there's quite a bit of time until then.
Same old crap from Nvidia. Nice comparison matrix too. Everyone will support ES 3.0 by the time this comes out. It is Nvidia who left it out of their most recent chip saying it was pointless.
As usual they are saying a chip that is not due, for a good year at least, is better than a chip that is based on a 4 year old graphics architecture.
At some point in the near future, if Intel doesn't decide to do it themselves, would it be possible for an OEM to licence NVIDIA IP and integrate it into an Intel ultra mobile design?
Sounds good, but there might be a few problems. Microsoft wants homogeneity between Windows phones and so they set requirements for the SoC (among other things). Back in the days of WP7 these rules were quite strict, which meant an OEM didn't have complete freedom in choosing the SoC. Nowadays, as far as I understand, there's only a set of minimum system requirements that an OEM has to meet. An Intel/NVIDIA SoC would obviously be more than powerful enough, but I wonder whether Microsoft would have anything to say about such an implementation. Furthermore, there's the question of the benefits of all this; while the NT kernel is there, the mobile OS would need some work to make proper use of all that power. Not to mention, having the same architecture and API doesn't immediately translate to running the exact same software from Windows 'PC' on Windows 'mobile'.
A Silvermont/Logan implementation, while great, is not that exciting. Next-gen Silvermont (hopefully wider) + Maxwell on smaller fab would be quite interesting.
Anand: NVIDIA got Logan silicon back from the fabs around 3 weeks ago, making it almost certain that we're dealing with some form of 28nm silicon here and not early 20nm samples.
I believe this is a wrong assumption and that the Logan sample is on 20nm.
As of April 14, 2013 TSMC has had 20nm risk production available. More than enough time for Logan to be produced on 20nm.
Quote: While TSMC has four "flavors" of its 28nm process, there is one 20nm process, 20SoC. "20nm planar HKMG [high-k metal gate] technology has already passed risk production with a very high yield and we are preparing for a very steep ramp in two GIGAFABs," Sun said.
Quote: Sun noted that 20SoC uses "second generation," gate-last HKMG technology and uses 64nm interconnect. Compared to the 28HPM process, it can offer a 20% speed improvement and 30% power reduction, in addition to a 1.9X density increase. Nearly 1,000 TSMC engineers are preparing for a "steep ramp" of this technology.
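To put those quoted factors in rough perspective, here is a minimal sketch of what they imply for a hypothetical GPU block; the baseline area, clock, and power below are invented for illustration, and only the 1.9x / 20% / 30% figures come from the quote (foundries normally present the speed and power gains as alternative operating points, not simultaneous wins):

```python
# Back-of-the-envelope view of TSMC's quoted 28HPM -> 20SoC factors.
# Baseline figures are invented for illustration only.

area_28nm_mm2 = 80.0    # hypothetical GPU block on 28HPM
clock_28nm_ghz = 0.60
power_28nm_w = 5.0

area_20nm_mm2 = area_28nm_mm2 / 1.9            # 1.9x density increase
iso_power_clock_ghz = clock_28nm_ghz * 1.20    # 20% speed improvement at the same power
iso_speed_power_w = power_28nm_w * 0.70        # 30% power reduction at the same speed

print(f"area: {area_28nm_mm2:.1f} mm^2 -> {area_20nm_mm2:.1f} mm^2")
print(f"iso-power clock: {clock_28nm_ghz:.2f} GHz -> {iso_power_clock_ghz:.2f} GHz")
print(f"iso-speed power: {power_28nm_w:.1f} W -> {iso_speed_power_w:.2f} W")
```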
Those are some fantastic numbers! While I'm all for seeing Logan in future SOCs, I'm not a huge fan of seeing them in Tegra. If developers need to rebuild their mobile games (see Riptide GP for Android) just to optimize it for your chip, you're doing something wrong. I have yet to hear anecdotes about how pleasant an experience it is to port an Android game to a proprietary chip such as Tegra.
With that said, I'd love to see this GPU on other chipsets such as the Exynos, or even Apple's A-series chips. I can't help but think that Nvidia is teasing Apple into a license agreement here. I mean, the very fact that Apple could get more than double the graphics performance on their iPad 4 with a Kepler GPU under the exact same power constraint must ring music to their ears. They could either dramatically increase iPad performance and eliminate any performance woes involved with driving that Retina Display, or they could get a massive boost in battery life while keeping performance levels similar.
Of course, it's Apple's call if they want to swap out Imagination for Nvidia. Let's hope Cupertino isn't too attached to its investment in the former.
Why would they want to swap out with NVIDIA? The PowerVR Series 6, of comparable performance, was available for license last January, and expected to be out in production this year. What you miss is that with the Power VR S6 they can get more than four times the graphics performance of an iPad 4 one year earlier than with a Kepler GPU; why would they wait a year then?
There isn't a single high-end PowerVR 6 part, such as the G6630, planned for production yet. It's just been paper launched.
- No engineering samples, nothing!
- Just numbers on paper!
Furthermore their peak paper product, the G6630, is slated for a peak performance of ~230 GFLOPS, which is almost half the performance that Logan is sporting!
Logan has the winning recipe:
- Most powerful hardware
- Good efficiency
- Best software and driver stack on the market!
This is going to be a major upset on the mobile market.
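For context on where figures like "~230 GFLOPS" and the Logan numbers discussed above come from, here is a minimal sketch of the usual peak-FLOPS arithmetic. The ALU counts and clocks are illustrative assumptions only (192 is the size of one Kepler SMX; the Rogue configuration is a guess), not confirmed specs for Logan or the G6630:

```python
# Peak single-precision throughput is roughly: ALUs x 2 FLOPs (one FMA) x clock in GHz.

def peak_gflops(alus, clock_ghz):
    return alus * 2 * clock_ghz   # 2 FLOPs per ALU per cycle from a fused multiply-add

logan_guess = peak_gflops(192, 1.0)   # one Kepler-SMX-sized GPU at ~1 GHz -> ~384 GFLOPS
g6630_guess = peak_gflops(192, 0.6)   # a configuration that lands near the ~230 GFLOPS figure

print(f"Logan (assumed): {logan_guess:.0f} GFLOPS")
print(f"G6630 (assumed): {g6630_guess:.0f} GFLOPS (~{logan_guess / g6630_guess:.1f}x gap)")
```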
mwildtech - Wednesday, July 24, 2013 - link
Mother of God....
kukarachee - Wednesday, July 24, 2013 - link
Disappointing.
mmrezaie - Wednesday, July 24, 2013 - link
you have issues, man!
dragonsqrrl - Wednesday, July 24, 2013 - link
haters gonna hate
Yojimbo - Saturday, July 27, 2013 - link
I have to disagree. I am not sure about your analysis that anything that comes out later is "by definition behind." Rogue is not promising 400 GFLOPS, is it? And it's expected to be out in Q4 2013, isn't it? What we know for sure is that Logan leapfrogs Rogue in terms of API compliance. Suppose Logan devices come out one year later than Rogue devices, and Logan performs upwards of 50% better than Rogue. Would that not be considered being "caught up?" And I never even said that with Logan, Nvidia would be "caught up." I said that Nvidia has a history of catching up. So my claim is that the advantage PowerVR holds over Nvidia immediately after Rogue and Logan, taking into account adjustments made for when the product is first made available to the market, will be less than the advantage that PowerVR has held over Nvidia for the duration of the previous generation, with similar adjustments made. I further claim that there is a good chance that eventually (0, 1, 2 generations?) that gap will be negative. My argument here has nothing to do with Logan being better than Rogue. It is a refutation of kukarachee's dismissal of Nvidia's claims for Logan's performance, which he based on the GPUs that Nvidia had on their previous generations' SOCs, and the amount of hype/interest they tried to generate for these SOCs.
michael2k - Sunday, July 28, 2013 - link
Yes, actually, Rogue is promising 400 GFLOPS, it promises OpenGL ES 3*/2/1.1, OpenGL 3.x/4.x, and full WHQL-compliant DirectX9.3/10, with certain family members extending their capabilities to DirectX11.1 functionality.
For Logan to be upwards of 50% better than Rogue it would have to be a 600 GFLOP chip since PowerVR 6 is expected to hit 400 GFLOP. What would not be caught up is if Logan was a 400 GFLOP chip released next year. You see, Rogue is intended to hit 1 TF, possibly in full powered tablet form factors, so for Logan to truly best Rogue it would need to hit 1.5TF in a tablet form factor.
I don't believe Logan is specced that high.
Yojimbo - Sunday, July 28, 2013 - link
The information I found on Rogue listed 250 GFLOPs. Where did you find 400 GFLOPs?
michael2k - Monday, July 29, 2013 - link
http://www.imgtec.com/powervr/sgx_series6.asp
PowerVR Series6 GPUs can deliver 20x or more of the performance of current generation GPU cores targeting comparable markets. => iPad currently is about 70GF, so a Series 6 implementation would be 1.4 TF
PowerVR Series6 GPU cores are designed to offer computing performance exceeding 100GFLOPS (gigaFLOPS) and reaching the TFLOPS (teraFLOPS) range => 100 GF is the bottom of the expected range
My point holds though that the Series 6 is designed to go over 1TF in performance, which is more than enough to match NVIDIA's Logan for the foreseeable future.
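As a quick check of the arithmetic behind this reading, a minimal sketch; the ~70 GFLOPS iPad 4 figure is the commenter's estimate (others in the thread use ~80), and none of these numbers are official specs:

```python
# Reading Imagination's quoted Series6 marketing ranges against a concrete baseline.

ipad4_gflops = 70.0
claimed_multiple = 20             # "20x or more of the performance of current generation"
series6_floor_gflops = 100.0      # "exceeding 100 GFLOPS"
series6_ceiling_gflops = 1000.0   # "reaching the TFLOPS range"

implied_top_end = ipad4_gflops * claimed_multiple
print(f"20x of {ipad4_gflops:.0f} GFLOPS = {implied_top_end / 1000:.1f} TFLOPS")
print(f"Quoted design range: {series6_floor_gflops:.0f} to {series6_ceiling_gflops:.0f}+ GFLOPS")
```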
TheJian - Monday, August 5, 2013 - link
You do realize IMG.L exists in phones because they couldn't cut it vs. AMD/NV in discrete gpus right? They have been relegated to cheapo devices by AMD/NV and are broke compared to NV. Apple apparently doesn't give them much profit. They had to borrow money just to make the ~100mil MIPS purchase (borrowed 20mil); meanwhile NV buys companies like Icera for 330mil cash, and on July 29th bought the Portland Group (compiler teams just got better at NV), with I'm assuming CASH again as they have 3.75B in the bank:
http://blogs.nvidia.com/blog/2013/07/29/portland/
I'm guessing this hurts OpenCL some also, while boosting NV in HPC even more. PGI owns 10% of compilers, while Intel owns ~25%. So NV just gained a huge leg up here with this deal.
Back to socs, none of the competing socs have had to be GOOD at GAMES until now and not even really yet as we're just getting unreal 3 games coming and announced. How do you think this plays out vs. a team with 20yrs of gaming development work on drivers & with the devs? Every game dev knows AMD/NV hardware inside out for games. That can't be said of any SOC maker. All of those devs have created games on hardware that is about to come to socs next year (no new work needed, they already know kepler). We are leaving the era of minecraft & other crap and entering unreal 3/unreal 4 on socs. Good luck to the competition. If AMD can survive long enough to get their soc out they may be a huge player eventually also, but currently I'd bet on NV doing some major damage with T5/T6 etc (if not T4/T4i). T4 is just to hold people off until the real war starts next year as games are just gearing up on android, and T4i will finally get them into phones in greater numbers (adding more devices in the wild based on a very good gpu set).
That being said, T4 is already roughly equal to all others. Only S800 looks to beat its gpu (I don't count apple, sales dropping and not housing anything but apple hardware anyway). I don't see rogue making any speeches about perf, just features and I don't see them running unreal 4 demos :) I don't hear anything from Mali either. S800 appears to have a good gpu (330) but it remains to be seen if games will fully optimize for it. I see no tegrazone type optimizations so far on Adreno. There is no Adrenozone (whatever, you get the point). All soc makers are about to be seriously upset by gaming becoming #1 on them. Before they needed a good 2D gui etc, not much more to run stupid stuff like minecraft or tetris type junk. We'll see how they all fare in stuff like Hawken etc types. I'd bet on NV/AMD with NV obviously being in the driver seat here, with cash no debt etc helping them and already on T5 by the time AMD ships A1 or whatever they call it.
SoC vendors better get their gaming chops up to snuff in a hurry or the NV GPU train will run them over in the next 3yrs. NV gpus will be optimized for again and again on desktop and that tech will creep into socs a year or two later over and over. All games made for PCs will start to creep to android on the very same hardware they were made for already a few years before on desktops. Devs will start to make advanced games on HTML5 (html6 etc eventually), OpenGL, WebGL, OpenCL, Java etc to ease portability, making directx less needed. At that point Intel/MS are both in trouble (we're already heading there). IF you aim a game at directx you have a much harder time porting to everywhere else. That same game made on OpenGL ports easily (same with Html5 etc) to all devices.
In a few short years we'll be looking at a 500w normal PC type box with Denver or whatever in it (Cortex A57 I'd guess) with an NV discrete card for graphics. With another 3yrs of games under the android belts it should make for a pretty good home PC with no WINTEL at all in it. Google just needs to get an office package for home out that does most of what office does by that time and there is no need for windows/office right (something aimed more at home users)? They need to court Adobe now to port it to android before Denver or Boulders launches (and anyone else aiming at desktops) or come up with their own content creation suite I guess. A lot of content comes from Adobe's suite. You need this on android with an office package also to convert many home users and work pc's. I see horrible stock prices for Intel/MS in less than 5yrs if google courts Adobe and packages a nice office product for home users. They can give away the OS/Office until MS/Intel bleed to death. All they need is games, adobe, NV/AMD gpus (choose either or both) in a tower with any soc vendor. They will make money on ads not the hardware or software MS/Intel need to make profits on. Margins will tank for these two (witness MS's drop on RT already, and Intel's profits tanking), and android will come to the front as a decent alternative to WINTEL. It’s kind of funny Google/(insert soc vendor name here) are about to turn MS into netscape. They gave IE away until netscape bled to death. The same is about to happen to them…ROFL. At the very least Wintel won’t be the same in under 5yrs. While Intel now has android, it would be unwise of Google to help Intel push them around (like samsung does to some degree). Much better to make a soc vender be the next Intel but much weaker. Google can push a soc vender around, not a samsung or Intel (not as easily anyway).
I’ll remind you that PowerVR has already been killed once by NV (and ATI at the time). That's why they are in phones not PC's :) Don't forget that. I don't see this playing out different this time either. Phones/tablets etc are accidentally moving into NV/AMD territory (gaming) now and with huge volumes making it a no brainer to compete here for these two. It's like the entire market is coming to AMD/NV's doorstep like never before. Like I said, good luck to Qcom/Imagination/Arm/Samsung trying to get good at gaming overnight. There is no Qcom "gaming evolved" or "The way it's meant to be played" games yet. NV/AMD have had 20yrs of doing it, and devs have the same amount of time in experience with their hardware (which with die shrinks will just slide into phones/tablets shortly with less cores, but it’s the same tech!). If Qcom/Samsung (or even Apple) don't buy AMD soon they are stupid. It is the best defensive move they can make vs the NV gaming juggernaut coming and if that happens I'll likely sell my NV stock shortly after...LOL. Apple should do this to keep android on NV from becoming the dominate gaming platform. MS could buy them also, but I don't think Intel can get away with it quite yet (though they should be able to, since ARM is about to become their top cpu competitor). When an ARM soc (or whatever) gets into a 500w PC like box Intel can make that move to buy AMD for gpus. Until then I don't think they can without the FTC saying "umm, what?". I'm also not sure Samsung can legally buy AMD (security reasons for a non American company having that tech?).
Anyway, it's not just about perf, it's also about optimizations (such as NV buying PGI for even MORE cuda prowess for dev tools). IE - Cuda just got a boost. More prep work for Denver/Boulder is afoot :) Just one more piece of the puzzle so to speak (the software stack). For google, they don't care who wins the soc race, as long as it pushes Android/Chrome as the Wintel replacement in the future for the dominant gaming platform and eventually as a PC replacement also for at least some work (assuming they court Adobe and maybe a few others for content creation on android/chrome or whatever). I’m guessing Denver/Boulder (and their competition) are pipelined to run at 3.5-4ghz in 70-100w envelopes. It would make no sense to compete with Intel’s Haswell/Broadwell etc in 8w. A desktop chip needs to operate at ~70w etc just like AMD/Intel to make the race even. You have a PSU, heatsink/fans and a HUGE box in a PC, it would be stupid not to crank power up to use them all. A Cortex-A57 chip at 4ghz should be very interesting with say, a Volta discrete card in the box :) A free OS/Office suite on it, with say $100 A57 4ghz chip should be easy to sell. A soc chip for notebooks, and a stripped gpu-less version for desktops with discrete cards sounds great. I like the future.
My 2c. Heck call it a buck....whatever...LOL.
ancientarcher - Wednesday, August 7, 2013 - link
$10.0 even!! A very good explanation of your theory. It's not in the mainstream yet, and Intel (and Anandtech) keep on pushing how great its latest and greatest chip is that you can get for only $500. This monopoly is going to crash and burn. MS, as you said, tried to go the ARM way (the system just won't support monopoly profits for two monopolies, maybe one but not two) but didn't execute well.
Denver won't necessarily be ported to the PC form factor, but who knows. It will surely change the landscape in the smartphone/tablet segments. I also think IMG, Mali and Adreno can learn to compete in games. Yes, they would not have had 20 years, but they do have a lot of momentum and the sheer number of devices running these platforms will push developers to code for them.
Mondozai - Saturday, September 7, 2013 - link
Who says they are measuring it against the iPad 4 (which is 80 gigaflops btw)?
Their wording is so vague that they can choose a midrange model and say that it's more representative of the GPU performance of most GPUs (at the time that statement was made), which would be correct.
You're making some pretty wild guesses.
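To illustrate how much the advertised multiplier depends on the chosen baseline, a small sketch; both the Logan figure and the baselines below are rough thread estimates, not measured numbers:

```python
# The "Nx faster" claim changes drastically with the baseline you pick.

logan_gflops = 400.0
baselines_gflops = {
    "iPad 4 (~80 GFLOPS)": 80.0,
    "typical 2013 mid-range phone (~30 GFLOPS)": 30.0,
}

for name, gflops in baselines_gflops.items():
    print(f"vs {name}: {logan_gflops / gflops:.1f}x")
```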
MySchizoBuddy - Saturday, August 10, 2013 - link
Mainboard chipsets? They exited that market. So you only have one example of Nvidia operating like a locomotive.
beck2050 - Monday, August 12, 2013 - link
Agreed. It looks like Logan will finally put Nvidia on the map in the mobile world big time, and after that it should get very interesting.
CharonPDX - Wednesday, July 24, 2013 - link
Even if the power consumption is double what they claim, and the performance half - it's *STILL* massively impressive.
The big question is: Do Intel and PowerVR have similar tricks up their sleeve?
fteoath64 - Thursday, July 25, 2013 - link
"Do Intel and PowerVR have similar tricks up their sleeve?". Not Intel but PVR likely with Rouge. The key competitor here is Qualcomm with their Adreno 330 powerhouse. That has time to evolve to better efficiency and tuning. But competition on the mobile side is heating up with this announcement and potentially Nvidia's shilf to quickly make this part available. Bigger tablets and UltraLight notebooks can benefit from it rather than settle with Intel's junk iGP.The disappointment of the market is no real Tegra4 and Tegra4i products in the market after such delays while Qualcomm churns out model after model of products for OEMs.
ocre - Thursday, July 25, 2013 - link
Huh? I really don't know what you people are talking about. Tegra 4's closest competition is the Snapdragon 800, which is almost nonexistent in the market. Tegra 4 is beating the SD800 in almost every way. Including design wins!!!
"While some will argue that Nvidia's design wins are on the weak side, they have more announced design wins than their competitor, Qualcomm with the Snapdragon 800"
http://www.brightsideofnews.com/news/2013/7/23/dou...
So while you guys might always have negative views towards nvidia, it's not nearly as bad as you guys try to make it out to be.
phoenix_rizzen - Thursday, December 19, 2013 - link
4 months later, and there are dozens of phones and tablets out with Snapdragon S800 SoCs inside; while there are ... how many with Tegra 4?
Doesn't matter how many "design wins" you have on paper if none of them ever reach the marketplace.
michael2k - Thursday, July 25, 2013 - link
Yes. PowerVR has had similar performance available to licensees since last January. Their first chips are expected to be out this year.
lmcd - Wednesday, July 24, 2013 - link
We won't necessarily see the PowerVR implementations to beat this, but the fact is that they exist and likely could beat this.
PowerVR needs reference implementations like all hell.
Qualcomm will beat this.
Don't 1-2 GCN cores do well against this, too?
Finally, Intel is already close, and will probably get this performance soon after T5 comes out, with lower power thanks to process advantage.
TheJian - Monday, August 5, 2013 - link
If we won't see a powervr to "necessarily" beat this, why discuss it? They don't exist until...umm...They EXIST. :) If I can't BUY it, it doesn't exist :)
I have a dream in my head of a 50000 gpu core soc that is less than .1w and 10000Tflops gpu power. Do I now have the fastest core on the planet? No, it doesn't EXIST until you can BUY it. Your first sentence really made me laugh. The fact is they don't exist. I'll give you powervr chips that beat this don't exist YET, but that's still just a MAYBE they will...ONE DAY. They'll face bankruptcy if nobody goes for mips tech. They clearly don't make enough from gpus on every apple device to survive the next few gens. TI exited due to not buying Icera (or any modem), and Imagination may exit due to mips meaning nothing (could be wrong, we'll see - deving for mips is a tough sell I'd think vs arm/x86). I suspect they'll be bought or die in under 5yrs. They have a market cap of 1B if memory serves and make about 30mil/yr (only 2012, less previously, we'll see for 2013). Good luck keeping up with everyone else making 12x or far MORE than that in the same business. You're dead or bought. Apple could buy them for the equivalent of a song (not sure why they haven't) but maybe they're making their own to dump them eventually. They bought PA Semi for the in-house cpu. Do they have a gpu coming? Otherwise why not purchase IMG.L? They are already in everything you make. Odd, whatever.
The process advantage is disappearing quickly for Intel. Shortly Intel will have to out-CHIP the competition, not out-fab them. It's a new ballgame from here on out for Intel, and the competition's profits blow Intel's away (IE Samsung making almost Intel's yearly profits in a Q). You can't outspend the competition on fabs when they make 3.5x what you do per year and can spend it all on FAB R&D until you go broke. Intel raised 6B in bonds to fund a buyback. It's a sign of weakness IMHO. They will have to cut dividends soon also (last I checked they can't fund them for more than a few years and even less with dropping profits). Again a sign of weakness.
Let me know when powervr (in something other than Apple products) beats a T4, let alone T5. I can't see a product yet :) Anything COULD beat NV. But until something DOES in a PRODUCT who cares? I'll need to see shipping product doing it first. Just like people ranting on lack of T4 wins (well duh, it just came out), I don't see S800 announcements right and left either. Why? It's NOT out either (not shipping in enough volume for massive announcements anyway, not yet). T4 just started shipping in July to ODM's.
I can predict everything will beat a T4 in 5 yrs...LOL. But none of it is shipping NOW. I can buy a toshiba tablet with T4 now (and HP also), though I wouldn't want the toshiba until quality is better. They appear to have design problems with their tablet. But it's shipping now. Qcom will beat this, but WHEN? And how do you know? S800 won't do it vs. T5 (possibly T4 but we don't know in what or if at all yet). Have you seen benchmarks for the next rev past S800? You are special. You work for Qcom or something and have a T5 in your hands? I'll reserve anything regarding AMD for when they actually SHIP a soc.
Also note Intel has hit 14nm snags recently and is already delaying chips. I agree with your PowerVR needs something for reference out now though ;) Where is this stuff? Are they that far away from shipping something? No xmas for them then, nor S800 if it doesn't get out the door in volume soon. We see T4 announcements from Toshiba, HP (both have T4's for sale), Asus, etc so they will be in xmas stuff for sure. How many S800 devices will make it for xmas? Are they shipping to ODM's yet? I know xperia Z is supposedly coming Sept so it has to be shipping but if anyone has some links to verify that...Then again when they ship devices with 3 different chips they can claim shipping but not really ship your desired chip for months. Exynos 5420 is ramping in Aug this month, but I guess we'll have to wait for Note3 to see T628 MP6 perf. They are claiming it's 2x faster than 5410 PowerVR544MP3. But I'll wait for benchmarks of a T628 before I believe this and battery life to go with that benchmark. How long before it is in a product? Note3 is expected Sept, but it has a long list of supposed chips, so not sure which has the S800 or 5420 or when these come specifically (or if I even have the correct chips, I saw a claim of S600 for one model - who knows at this point). I wish companies would just ship ONE soc per product or name them something different like note3.1 for s800, note 3.2 for 5420 etc etc. Something to differentiate better for customers who don't read every review on the planet. Would the REAL note3 please stand up...LOL. Granted its really about the included modem for specific regions but it's still confusing for a lot of people.
Intel is moving to their own soc gpus. I'll believe they beat people when I see it in anything gpu :) NV had samples of 20nm socs ages ago, so Intel will need 14nm to do any damage here, and broadwell is slipping to 2015. When will we see a 14nm soc if they are having problems? All socs will be 20nm next year, so Intel's 22nm will be facing these (T5 etc) not far after silvermont/valleyview/baytrail release (tablet by xmas, phone next year?). All have sped up 14nm plans, so they won't be too far behind a DELAYED intel 14nm process, especially samsung swimming in billions to dump into their fab tech. T6 is supposed to be 16nm finfet early 2015, so not much of a process advantage for Intel at 14nm then, and T6 has maxwell in it. Intel better have a great gpu, and this won't be a basic arm port for cpu either - it is in-house DENVER :)
http://www.digitimes.com/news/a20130418VL200.html
So 20nm risk production started 2013 Q1, and 16nm risk starts by end of year.
"Chang indicated that TSMC already moved its 20nm process to risk production in the first quarter of 2013. As for 16nm FinFET, the node will be ready for risk production by the year-end, Chang said."
So depending on who T6 comes from it's either 16nm (tsmc, it's at least this) or 14nm (sammy?)? Either way Intel faces stiff competition from here out. The fab party is over. Time to make some unbeatable chips or pay the price. You won't outFAB the competition any more. Intel is about to release 22nm tablets/phones and everyone else is about to do 20nm shortly after that. Not sure how that's a real victory. I don't see one at 14nm either. We may actually see Intel get beaten to 10nm if samsung keeps pounding out 7.9B+ in profits per Q (Q1 was 7.9B, 8.3B i think for Q2). This kind of profit can kill Intel's fab lead quickly. Unless they seriously boost profits samsung has 10nm in the bag and that's assuming Intel wins to 14nm which has yet to be proven and is showing problems or broadwell wouldn't be delayed right? Will apple be fabbing for people shortly? They certainly have the cash to build 3-4 fabs today easily and laugh in 3yrs. I'm surprised it hasn't already happened (though they did buy one, why not start some new ones while you're rich?). Well rumor is they bought UMC, but it's probably true. I'd be doing that and building 3 450mm fabs for the future (what's that 20bil for 3 of them? Even at 25-30bil who cares, apple has ~130). Buy IMG.L and start pumping out gpus in one fab, socs in another and memory in a 3rd for all your devices in 3yrs. Samsung makes as much as they do on a device because 60+% of a phone/tablet comes from IN HOUSE.
As far as I can see this xmas for tablets is owned by S800/T4. Xmas Phones are owned by S600 I think, with maybe enough volume for S800 to make it into some things. T4i is Q1 and will miss xmas, and I don't think 5420 will make it into anything volume wise for xmas as it's just ramping this month.
T5 will be out before July 2014 on 20nm (so will a 20nm S800 etc). Intel won't be seeing 14nm until 2015 in volume if they crack 2014 at all. I don't call that SOON after T5. Unless you want to say VERY soon after Intel puts out a 14nm soc it will face T6 etc.
phoenix_rizzen - Thursday, December 19, 2013 - link
Here it is, Christmas time, and your predictions are a little off. There are many S800 SoCs in phones, many S600 SoCs in phones, nearly all the good tablets have S800 SoCs, and no Tegra4 to be seen.
If you can't even predict the next 4 months (July-Dec), why should we listen to your predictions for the next 3 years? :)
Tegra4 is a flop. Tegra4i is an admission by nVidia that Cortex-A15 can't cut it in phones. And Tegra5 won't be much better (if we even see it in anything by this time next year).
Zoolookuk - Sunday, July 28, 2013 - link
I don't know why people are picking on your comment - faster than A6? I'd hope so, given A6 is from 2012, and this is arriving in 2014. The question is whether it'll be faster than A8, not something from 18 months ago.
karasaj - Wednesday, July 24, 2013 - link
Wow. Even if we account for a new iPad delivering 2x the performance of the last one (unlikely?) Kepler is still ahead, potentially in both power and performance. This is crazy!
michael2k - Wednesday, July 24, 2013 - link
Except it will be first among equals. PowerVR 6 is designed to scale up to 1TF, and realistically ship in 200GF to 600GF configurations, making the Logan nothing special.
Krysto - Wednesday, July 24, 2013 - link
And you think Kepler isn't? Kepler is scalable as far as Nvidia's highest end chips. Plus that argument is a pretty weak one, since what matters is how much you can scale under a certain power envelope - not how much in TOTAL. That's completely irrelevant. It just means Series 6 will - eventually (years from now) - get to 1 TF. But so will other chips.
Also Kepler IS something special. Here's why - full OpenGL 4.4 support. Imagination, Qualcomm and ARM have barely gotten around to implementing OpenGL ES 3.0, and even that took them until now basically, to implement the proper drivers for them, and obtain the certification.
The point is, at the same performance level, Kepler-optimized games should look a lot better than games on any other chip.
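As a rough sketch of the "features gap" being argued in this sub-thread, here are the headline additions at each API level; this is not exhaustive and not an authoritative spec list, just the usual talking points:

```python
# Headline feature additions per API level (illustrative, not exhaustive).

api_headline_features = {
    "OpenGL ES 2.0": ["programmable vertex/fragment shaders"],
    "OpenGL ES 3.0": ["instanced rendering", "multiple render targets",
                      "transform feedback", "ETC2/EAC texture compression"],
    "OpenGL 4.x":    ["tessellation shaders", "compute shaders (4.3)",
                      "indirect draw commands", "buffer storage (4.4)"],
}

for api, features in api_headline_features.items():
    print(f"{api}: {', '.join(features)}")
```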
Krysto - Wednesday, July 24, 2013 - link
Oh, and even Intel barely got around to implementing OpenGL 4.0 in Haswell - the PC chip. So don't expect the others to support the full and latest OpenGL 4.4 anytime soon.
1Angelreloaded - Wednesday, July 24, 2013 - link
In case you don't know this already, sandy/ivy/haswell integrated graphics are absolutely terrible; in most cases when used alongside your dedicated GPU it lowers performance instead of increasing it. On its own it might be fine for a few things here and there, but even that is terrible.
ExarKun333 - Wednesday, July 24, 2013 - link
Your GPU knowledge is laughable. Integrated GPUs are disabled when running a dedicated one. You obviously are a troll or are plain ignorant about all CPU/GPU issues. Others, please ignore this user's posts...
Flunk - Wednesday, July 24, 2013 - link
Actually, you're wrong. In Nvidia Optimus/AMD Enduro the discrete GPU draws into the integrated graphics' frame buffer even when in discrete mode. Also, on desktops it is possible to enable your integrated graphics and discrete GPU at the same time to support more monitors.
lmcd - Wednesday, July 24, 2013 - link
Difference is only present on Enduro. Optimus is almost identical. The performance hit with multiple monitors on multiple devices is likely a Windows thing and a framebuffer sync thing. Not an actual problem with Intel graphics.
So I know we know Exar is wrong, but his point that Ang's information is irrelevant is in fact correct, for this mobile situation anyway.
happycamperjack - Wednesday, July 24, 2013 - link
If you take a look at Microsoft's Surface Pro's benchmark numbers, you'd be shocked by how many times faster its GPU is compared to the latest iPad and Android phones. Because I was!
texasti89 - Thursday, July 25, 2013 - link
"many times faster" is not the right metric .. they all about the same when you look at the perf/watt.happycamperjack - Friday, July 26, 2013 - link
I'm just dismissing the guy who's saying that integrated graphics from Intel are absolutely terrible. They're certainly not terrible compared to the GPU in a top mobile SoC.
Refuge - Thursday, July 25, 2013 - link
Good sir, Android (90% of what this will be running, I would assume) just now, with 4.3, which is yet to appear on a device (Nexus 7 R2 is first), implemented OpenGL ES 3.0 support.
Look, being the first to support new APIs is great, but being the only one gains you nothing, because nobody is going to program for the 1%; that's just bad business.
ltcommanderdata - Wednesday, July 24, 2013 - link
Even with Kepler's advanced features support, we'll have to see how much effort developers put into optimizing games for Kepler. Most mobile games are developed first for iOS and then ported over to other platforms.
http://venturebeat.com/2013/07/23/ea-made-more-sal...
EA just announced that the iOS App Store is EA's biggest retail distribution channel, bigger than other mobile app stores, their own Origin and other PC and console channels. So games being designed for and optimized for iOS/PowerVR GPUs are a given, because that's where the market is. nVidia will have to actively convince mobile developers to support their additional features.
What's more, I don't believe any mobile OS officially supports any version of desktop OpenGL, much less OpenGL 4.4. Android 4.3 just announced incorporation of OpenGL ES 3.0. Rather than OpenGL 4.4, the most common implementations will likely have Kepler's additional features exposed as a bunch of OpenGL ES 3.0 extensions, which may also limit adoption.
lmcd - Wednesday, July 24, 2013 - link
It isn't like EA is popular with hardcore gamers, so no one truly uses Origin. Their console games mostly suck, their PC games mostly suck.
Who is surprised?
Scannall - Thursday, July 25, 2013 - link
I'd just point out that hardcore gamers are really a niche market. Worldwide sales of the Xbox 360 and the PS3 over 8 years are about 70 million each. Not exactly huge.
Refuge - Thursday, July 25, 2013 - link
Thank you... Someone who understands!
To you good sir... I agree. :)
Kill16by9TN - Friday, July 26, 2013 - link
Johnny's name is KeplEer, not KeplAr!prophet001 - Monday, July 29, 2013 - link
Kepleer hunh.eanazag - Wednesday, July 24, 2013 - link
I am in agreement. Features are just as much a part of the conversation as power and performance. The software side should also be good as Nvidia has done a good job with this. Software/drivers being my biggest gripe for Intel.
This is definitely drumming up business for their IP licensing. Squarely aimed at Apple. Intel should take note also because Atom's graphics have always sucked.
So this is big and good news for customers. Thinking smartphone only is too small.
My question is does this do well power-wise for video. We will have to wait and see. I'm glad to see Nvidia hanging around when the idea of a GPU only company hanging around this long was pretty pessimistic. They have been good at redefining themselves, which is very tough for companies. (cough, cough Kodak and HP with Palm)
Refuge - Thursday, July 25, 2013 - link
Their support may be great, but nobody is going to program for Kepler features if they don't hold a solid market share, and I mean VERY solid, mirroring Apple iPhones. I use them as an example because while Androids hold most of the market, they are a mixed bag of fragmentation.
Also, the new Atoms coming out have almost nothing in common with the old Atoms. People keep making references to the old ones, which I agree were worthless even for a netbook, but the new ones I actually have some high hopes for.
michael2k - Wednesday, July 24, 2013 - link
Correct, I don't think Kepler is all that special. Nvidia has lost a lot of credibility in the mobile space by being unable to take the crown for three years straight. In any case, if you're going to be quoting Kepler PR, this is PowerVR's list of accomplishments for the 6 series GPU:
Delivering the best performance in both GFLOPS/mm2 and GFLOPS/mW, PowerVR Series6 GPUs can deliver 20x or more of the performance of current generation GPU cores targeting comparable markets. This is enabled by an architecture that is around 5x more efficient than previous generations. => At the same power level expect 5x the perf, approximately 350GF
All members of the Series6 family support all features of the latest graphics APIs including OpenGL ES 'Halti'*, OpenGL 3.x/4.x, OpenCL 1.x and DirectX10 with certain family members extending their capabilities to full WHQL-compliant DirectX11.1 functionality. => Just like Logan
PowerVR Series6 GPU cores are available for licensing now. => Posted last January! Expect then to see this in SoC this year, not years from now.
lmcd - Wednesday, July 24, 2013 - link
Whaddya mean? Kepler doesn't scale to those either. It only clockspeed scales and is probably close to headroom, while adding another core will make the die WAY too big.
No, you're wrong. Also, ES 3.0 is pretty damn close to 4.2 compliance. So don't act like it's a huge jump.
Finally, don't forget AMD's GCN is competing in this space and has proven itself very relevant.
ollienightly - Thursday, July 25, 2013 - link
And what do we need OpenGL 4.4 for? More wasted silicon or power? You do realize NO ONE would ever develop OpenGL 4.4 games for the Tegra 5 EVER.
djgandy - Thursday, July 25, 2013 - link
Nvidia haven't even implemented OpenGL ES 3.0 on anything that ships or is near to shipping! And now they are going completely to the other end of the spectrum and doing a fully DX11 compliant chip!? Oh, and the reason there is no OpenGL 4.4 on other mobile GPUs is because there is no point in doing it. Why burn area for an API you cannot use?
Keep smoking that Nvidia marketing bs.
ltcommanderdata - Wednesday, July 24, 2013 - link
The expectation is that this year is Apple's "S" refresh, which last time, with the iPad 2/iPhone 4S, brought claims from Apple of 9x (iPad 2)/7x (iPhone 4S) GPU performance increases over the previous generation, which I believe translated into ~5x real world performance improvements. As such, a 5x theoretical GPU performance increase by nVidia in 2014 is not out of line with what Apple could be delivering later in 2013. As michael2k points out, PowerVR 6 Rogue is certainly scalable to those performance levels. We'll have to see them implemented in actual devices and see how well they perform within realistic power constraints to really compare their effectiveness, of course.
mmrezaie - Wednesday, July 24, 2013 - link
I also think the end result will be like what we have with AMD. AMD offers very good hardware, but nvidia pushes more on the API/driver level. So if PowerVR and Kepler become comparable, I wonder what kind of competition we will see on the driver/API/game engine front.
ltcommanderdata - Wednesday, July 24, 2013 - link
http://gfxbench.com/compare.jsp?D1=Apple+iPhone+5&...
Apple actually writes their own drivers for their PowerVR GPUs, which seem to be very efficient considering the iPhone 5's SGX543MP3 actually achieves double the triangle throughput of the Galaxy S4's SGX544MP3 despite the Galaxy S4 having higher clock speed and memory bandwidth. So there is the comparison between PowerVR reference drivers, which is probably what most device manufacturers are using, vs Kepler, and Apple PowerVR drivers vs Kepler. It's a safe bet Apple will continue to make gaming a focus on iOS given that's a major part of the App Store, so they'll continue putting effort into GPU driver performance optimization. nVidia of course has a long track record with driver optimization so we'll definitely see lots of competition in this area.
In terms of features, Apple continues to expose new features in PowerVR GPUs through OpenGL ES extensions. They've already implemented or are going to implement in iOS 7 a number of OpenGL ES 3.0 features on existing Series 5/5XT GPUs including sync objects, instanced rendering, and additional texture formats, etc. The major untapped feature is multiple render target support, which should be coming since the EXT extension has now been finalized. Series6/Rogue is DX10 compliant, so I expect we'll be seeing those additional features like geometry shaders exposed through OpenGL ES 3.0 extensions. So it'll be a comparison between OpenGL 3.3 vs OpenGL 4.4 exposed through OpenGL ES 3.0 extensions.
One advantage Apple does have is that they can regularly release performance optimizations and new features in regular iOS updates which see rapid adoption throughout the userbase so developers can count on the features and use them. nVidia likely has more trouble pushing out new drivers broadly on Android. As such, it's good that nVidia is aiming high to begin with in terms of performance and features since they have less opportunity to gradually increase them over time through driver updates.
name99 - Wednesday, July 24, 2013 - link
You also have to remember that most of Apple's customers (like most customers of all phones and tablets) don't give a damn about the most demanding games. Apple is happy to help out game developers, happy even to boast about them occasionally, but games are not central to what Apple cares about in a GPU.
What Apple DOES care about is enhancing the entire UI. This means that (especially as they get more control over their entire SOC) they're going to be doing more and more things that will be invisible if you just look at specs and traditional benchmarks. For example, it means that they will pick and choose whatever features are valuable in OpenGL 4.4 and integrate those into their devices (maybe, maybe not, exposing API).
In this context, for example, the shared buffers of OpenGL 4.4 would be extremely valuable to Layer Manager, and they have enough control over the CPU (and I assume could negotiate enough control over the GPU) to implement what's necessary to get this working on their systems. It could be there, invisible to external programmers, but making Layer Manager operations (ie the entire damn UI) run 20% faster and with 20% less energy.
A second example. Apple, on both OSX and iOS, have constantly tried to push image processing ever closer to what is "theoretically" correct rather than computationally easy. So they would like all image processing to be done in something like Lab space, and only converted to gamma corrected RGB at the very stage of display. They do this on OSX, on iOS with less CPU power they do as much as they can in SRGB space. But if you had HW that ran the transformations from RGB (with various profiles) or YUV (again with various profiles) to Lab and back, you could expand this for usage everywhere. It's the kind of small thing (like kerning or ligatures) that many people won't notice, but it will make all images (and especially manipulated/processed images) on iOS look just that much better --- but to really pull it off requires dedicated HW.
Point is: I am sure Apple will continue to throw more transistors at the GPU part of their SOC. But they won't necessarily be throwing those transistors at the same things nVidia is throwing them at. They may even seem to lag nVidia in benchmarks, but it would be foolish to map that onto assuming the devices feel slower than nVidia devices.
happycamperjack - Wednesday, July 24, 2013 - link
I think your opinion about Apple's position on gaming is 4 years old. 75% of App Store revenue comes from games. The iPhone 3GS, iPhone 4S, iPhone 5, and iPads all featured GPUs, not necessarily CPUs, that were well ahead of the competition when they came out. So Apple doesn't just CARE about games, they effing love games/money! It didn't start out this way of course, but they learned that's what their customers wanted through app downloads and sales, so that's what they are focusing on now.
lmcd - Wednesday, July 24, 2013 - link
Apple's mentality there is accurate for Macs but not iOS devices.
happycamperjack - Wednesday, July 24, 2013 - link
Well I can't blame them. Most computer games are written and optimized in DirectX. With the Mac market still a small fraction of the PC market, it's hard to see developers change their DirectX stance. Thus it's hard to see Apple change their stance on computer games as well. It's more likely for them to make a console based on iOS than push for Mac games.
TheJian - Monday, August 5, 2013 - link
The app store doesn't make a ton of money, neither does google's currently. Revenue and profit are two different things also. They are both really just designed to push the platforms and devices. Maybe they will be huge money makers in the end as games get more evolved and rise in price, but I doubt Apple makes 1B on the store vs something like 39B from the hardware. In 2011 Piper Jaffray said they made ~$239 mil but apple divulges nothing so no real proof of what they pay in costs to run the store or what they make after that. Apple has only claimed to run at break even (oppenheimer 2010). If numbers are correct, they make ~$3bil in revenue on downloads but how much is left for profits after server costs, maintaining them, bandwidth etc.
Apple cares about games when they start OPTIMIZING for their products. Currently they are inspiring nobody to do this. I see nobody but Nvidia doing anything to make things better looking on their hardware. The fact that apple's store has 75% of revenue from games just says game devs own most of the revenue from the store, and that consoles will suffer because many game elsewhere (which shows in wiiu/vita sales). It says nothing about APPLE themselves promoting games development.
Links to Apple proof of game investing please. So far all I see is "you should be thankful we let you put your game on IOS" rather than "Here' please make it BETTER on IOS & A6 and we'll help you". With NV they send help to the game dev to optimize for their hardware (literally send people to help out). Check out Ouya comments etc. If apple is doing this I'm not aware of it and have seen no articles from any dev saying "apple was great to work with, gave tons of help to optimize for A6" etc...AMD/NV have courted game devs forever (ok, 20yrs). I have seen nothing from Apple and devs didn't even jump on macs conversions until they hit 10% share (which is dropping now, so I expect it to drop again for devs). Apple appears to be learning nothing. If they had made games a HUGE part of their priority 3yrs ago android would have never taken off. They played the enterprise card properly but so far they have wasted the game card. Enterprise brought down Rimm (everyone gaining exchange), but only google seems to be helping to court devs (and NV helping). Shield wasn't on display in main view for all at Google IO for nothing. Google seems to understand gaming is needed to kill windows, take over some PC sales and kill directx. All of these are done with games on android. Couple that strategy with a free OS and Free office package at some point for home users (in a decent package) and WINTEL isn't needed. Qcom/NV/Samsung will provide the soc power and they google provides the platform to run on them. All of us win in games if we get off directx. Devs make more money because of easy porting thus having more money for risk on better games since they have so many to target. Any decent game should make money just due to the sheer # of devices to sell to at that point.
djboxbaba - Wednesday, July 24, 2013 - link
This is ALL relative; does anyone realize that this will not be released till 2014? ......2014? The A8/A8X (or equivalent) will be released by then, as well as Qualcomm's latest Snapdragon processors. Don't place your expectations too high.
Scannall - Wednesday, July 24, 2013 - link
I am wondering if they will be too late to market again. PowerVR Series 6 (Rogue) devices should be shipping any time.
http://en.wikipedia.org/wiki/PowerVR#Series_6_.28R...
NLPsajeeth - Wednesday, July 24, 2013 - link
Seems like the most similar desktop part is the GeForce GT 630 OEM:
http://www.geforce.com/hardware/desktop-gpus/gefor...
And laptop somewhere between 720M and 730M.
randomhkkid - Wednesday, July 24, 2013 - link
Think about the implications: if they can get just shy of 720M - 730M performance at about 5W, what can they do with a laptop GPU at around 35W? O.o
Spunjji - Wednesday, July 24, 2013 - link
We already have Kepler at that power level, though. So nothing that they haven't done already...
randomhkkid - Wednesday, July 24, 2013 - link
I mean that performance currently hasn't scaled with the power; I'm assuming that the current 720M has a power envelope of around 35W, and yet there is a ~5W Kepler GPU that has around the same performance. If they could scale it up while keeping the efficiency it would be a sight to behold.
DanNeely - Wednesday, July 24, 2013 - link
I think you're way high on the 720M TDP. It's a GF117 part. The other GF117 parts are 12.5W (710M) or 15W (620M and 625M).
35W is probably a bit too high for the 730M too; it's a GK208 part and the Quadro 510M/610M (the only mobile GK208s I can find TDPs for) run at 30W.
Jaybus - Wednesday, July 24, 2013 - link
For the same architecture, it is true that performance scales linearly with clock speed. However it is not true that power consumption scales linearly with clock speed. 5 W at 500 MHz does not equate to 25 W at 2.5 GHz.
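A minimal sketch of why that extrapolation breaks down: dynamic power goes roughly as C * V^2 * f, and reaching higher clocks usually also means raising the voltage. The voltages below are invented purely to show the shape of the curve, not measurements of any real chip:

```python
# Linear-in-frequency scaling understates power once voltage has to rise too.

def dynamic_power_w(base_power_w, base_freq_ghz, base_volt, freq_ghz, volt):
    return base_power_w * (freq_ghz / base_freq_ghz) * (volt / base_volt) ** 2

freq_only = dynamic_power_w(5.0, 0.5, 0.9, 2.5, 0.9)    # frequency scaled, voltage held flat
with_vscale = dynamic_power_w(5.0, 0.5, 0.9, 2.5, 1.2)  # frequency scaled, voltage raised

print(f"5.0 W @ 0.5 GHz -> {freq_only:.0f} W (frequency only) -> {with_vscale:.0f} W (frequency + voltage)")
```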
roberto.tomas - Wednesday, July 24, 2013 - link
This mostly looks very rosy for Nvidia's future IP in mobile space. If Logan comes at 28nm though, it *does* face the same issue as Tegra 3 had — unless it comes to market not in 2014 but in 2013.
As a rough estimate, at 28nm and from the power shown, to achieve max performance they need 25w on the gpu power meter. That's a bit of a shame, because that is clearly outside the bounds of minimal cooling.
On the plus side, that is a tonnage of power, which companies can tune to use as much or little of as they want, all the way up into the absurd levels — rather like the PowerVR Rogue. In that sense it looks good for NVidia.
sontin - Wednesday, July 24, 2013 - link
What? If they can get 76GFLOPs/s in less than 1W they do not need 25W for 5x the performance...
And the remark about 28nm: every company will be supply constrained with 20nm in 2014. And Tegra 3 was a huge success for nVidia (50% revenue increase in FY2013), such that they can sell it for a lower price (cheaper wafer) and in huge numbers (more wafers, better yields).
And do not forget: 20nm will only bring a 30% lower power consumption. So if you can achieve this with an optimized architecture, there is no benefit to using 20nm from the start for Tegra.
kukarachee - Wednesday, July 24, 2013 - link
Yes indeed. This looks to be another SOC that will tank just like Tegra 4 has totally crashed and burned (literally ;) ) with little success in the market. Tegra 4 was a flop, with power consumption and heat output so poor it needs a fan to keep it cool in some cases.
Nvidia should stop burning money with the continued Tegra failures.
This also ignores the 1000lb gorilla that is Intel wading into the SOC space, who no one, not even the big players like Qualcomm or Samsung, much less a small-fry like nvidia, will have much success competing against once they get rolling with their SOCs built using their exclusive and superior foundry technology.
Eric S - Wednesday, July 24, 2013 - link
Not that Tegra and Apple processors really compete, but Apple is rumored to be teaming up with Intel for access to their fabs.
lmcd - Wednesday, July 24, 2013 - link
Apple would be considered a competitor and probably won't be accepted, I believe, unless Intel is giving access to old fabs. Read the official information before you post.
Intel will probably never make ARM SoCs since they left the business.
Krysto - Thursday, July 25, 2013 - link
Next year 3 Ghz 20nm ARM chips are rumored to arrive. Intel can't compete with that with Atom. Plus Atom's GPU is still no match for the average ARM GPU, let alone for Kepler.
Refuge - Thursday, July 25, 2013 - link
I think they still have another tick and tock before Intel starts rampaging about in SOC land. But you do bring forward a valid point good sir.
Krysto - Wednesday, July 24, 2013 - link
I definitely agree Logan needs to be 20nm next year, and I'm not sure Nvidia will do that. I do know they intend to have Tegra 6 after it, at 16nm FinFET.
rocketscience315 - Wednesday, July 24, 2013 - link
The feature list is disingenuous, as CUDA is nV-specific (lots of others have OpenCL) and DX11 has no relevance except perhaps for MS Surface.
Spunjji - Wednesday, July 24, 2013 - link
Disingenuous? A marketing slide? Nevar ;D
Quite true, of course. Unless they're planning to foist CUDA upon the mobile world it's not much of a draw; especially not in what will be a rather weak implementation of their compute architecture.
DanNeely - Wednesday, July 24, 2013 - link
What about WP8.x? I'd assume it's using DX instead of OGL for 3D.
rocketscience315 - Wednesday, July 24, 2013 - link
You're probably right about DX running on WP8, but I presume they can run OGL also, and developers would use OGL as it would make some code cross-platform. I'd bet that DX is a rather tiny market relative to OGL on mobile.
Refuge - Thursday, July 25, 2013 - link
I can't possibly see WP8.x running DX, unless you are referring to the Surface Pro.
I mean, the option is there, but lone APIs don't get developer attention (unless my limited knowledge is misleading me), and that would make porting things to WP from iOS/Android more work than it could possibly be worth.
Much the same as the Cell processor in PS3, never got used fully because it was a bear to code for.
Stuka87 - Wednesday, July 24, 2013 - link
Arg, anytime I read "Performance Data" from nVidia about an unreleased product my head hurts.
Voldenuit - Wednesday, July 24, 2013 - link
It's... the best at what it does? SNIKT!
jjj - Wednesday, July 24, 2013 - link
"we haven't even seen Tegra 4 devices on the market yet"The Toshiba tablets have been in stores for a while now.
Devfarce - Wednesday, July 24, 2013 - link
They compare it to the iPad, but this could very well be in the next iPad down the road. Not as Logan, but as licensed IP. Apple could have licensed this a year ago and the world wouldn't have been any the wiser, given NDAs, despite the recent announcement. They build their own chips and license PowerVR IP. There isn't much stopping them from soliciting other vendors other than inertia behind an established partner/codebase and existing contracts.
Stuka87 - Wednesday, July 24, 2013 - link
Except the next PowerVR, which will be shipping any time now, will already match Logan (from the looks of things, at least).
Scannall - Wednesday, July 24, 2013 - link
Plus they own a good-sized chunk of PowerVR.
Eric S - Wednesday, July 24, 2013 - link
It would make sense. Apple is more invested (and ahead of the game) in their CPU tech. They may license a graphics chip if it is good enough. For that matter, they may be able to license Intel graphics tech. People criticize it, but it may actually perform well for the tablet space where there is no dedicated memory.
michael2k - Thursday, July 25, 2013 - link
No, it doesn't make sense. They currently license PowerVR, which is ahead of NVIDIA. The PowerVR 6, which is at the same performance level as Logan/Kepler, was available for license last January and should be shipping in product this year.
Krysto - Wednesday, July 24, 2013 - link
And you thought Atom stands a chance against ARM chips, Anand? Even Bay Trail (a tablet-only chip) has like HALF the GPU performance of Tegra 4... And Merrifield will be completely embarrassed by 20nm ARM chips in 2014 (which presumably includes Tegra 5, too).
Refuge - Thursday, July 25, 2013 - link
Give it a tick and a tock and you will be surprised.
Haswell is Intel's real attempt at creating a mobile product that meets the expectations that come with the Intel name.
It is doing much better than Ivy did, and the graphics options are better, but the whole thing is still relatively young and juvenile. Next round I think we will see some very impressive results. Like I keep telling people, the Atom of tomorrow isn't going to be the Atom of yesterday's netbooks.
rwei - Wednesday, July 24, 2013 - link
Serious question: why do mobile GPUs matter? I'm something of a declining gamer who probably last played a serious game around when ME3 came out, and I guess SC2:HotS briefly - and nothing on mobile platforms has excited me. On the other hand, I've accumulated a fat stack of games to play on consoles - the above, plus Heavy Rain, Uncharted 3, The Last of Us - but I wouldn't actually play those on, say, a tablet (Heavy Rain maybe?), and even less so on a phone.
Infinity Blade was impressive for its time, but I would hardly buy a device to play it, and even in my reduced-passion state I still care more about games than most people.
randomhkkid - Wednesday, July 24, 2013 - link
I think it will become more of a need as phones become the one device that does everything, i.e. when docked it becomes your desktop and when undocked it's a smartphone. Check out the Ubuntu Edge to see what I mean.
rwei - Wednesday, July 24, 2013 - link
As things stand, I wouldn't even do that with an Ultrabook-class laptop, never mind a typical (non-Win8 convertible) tablet - and phones are still on a whole other plane entirely...!
Particularly if high-DPI catches on (and I hope it does), my understanding is chips of this size won't have anywhere near the bandwidth to support that use case.
blacks329 - Wednesday, July 24, 2013 - link
I had never thought of that, but Heavy Rain on a tablet would actually be kind of awesome! Too bad that studio is Sony-owned (i.e. PS games only) and the director is a pretentious douche. Nonetheless, they make interesting 'games' and I look forward to playing Beyond Two Souls.
Refuge - Thursday, July 25, 2013 - link
There is always that slim chance it will pop up in the PlayStation store on some "Approved" HTC devices. I know my HTC One X+ got access to it because of the Tegra 3 in it, but the selection is a joke if you ask me.
name99 - Wednesday, July 24, 2013 - link
You're right - the population that cares about games is tiny, meaningless to Apple, Samsung, Nokia et al.
The GPU is relevant on iOS, however, because the entire UI is built around "layers," which are essentially the backing store for every view (think controls like buttons, status bars, text windows, etc.). These layers are composited together by the GPU to generate the final image you see. For this to give fluid scrolling, that compositing engine needs to be fast (and remember it is driving Retina displays, so lots of pixels). Even today (and a lot more so in iOS) each of these views can be subject to frame-by-frame transformations (scaling, translation, becoming darker, lighter, or more or less transparent) to provide the animations one takes for granted in iOS, and once again we want those to run glitch-free.
All this stuff basically pushes the backend (texture) part of the GPU, not geometry and lighting. However something which DOES push geometry (I don't know about lighting) is Apple's flyover view in Maps. [Yeah, yeah, if you feel the need to make some adolescent comment about Apple Maps, please, for the love of god, go act like a child somewhere else.] The flyovers in Maps as of today (for the cities that have them) are, truth be told, PRETTY FREAKING INCREDIBLE. They combine the best features of Google Earth and StreetView, in a UI which is easier to use than either of those predecessors, and which runs a lot smoother than those predecessors. But I am guessing that the Maps 3D views pushes the GPU HW to its limits. They are smooth, yes, but they seem to do a careful job of limiting quality to keep smoothness going. There is no anti-aliasing in play, for example, and the tessellation of irregular objects (most obviously trees) is clearly too coarse. All of which means that if a GPU 2x as fast were available, Apple could make the 3D view in Maps look just that much better.
Finally, I suspect (without proof) that Apple also does a large amount of its rendering (i.e. the stroking and filling of paths, the construction of glyphs, and so on) on the GPU. They've wanted to do it that way for years on OS X, but were always hindered by backward-compatibility concerns. With the chance to start over on iOS, I'd expect they made sure this was a feasible path.
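As a purely illustrative sketch of the compositing model described above (this is not Apple's API; layer contents are reduced to a single value so the example stays runnable), the per-frame work is just blending already-rendered layers, which is why the texture/blend side of a mobile GPU matters so much for UI fluidity:

```python
# Illustrative sketch only: a layer compositor blends pre-rendered layer contents
# back to front every frame; animations change opacity/transforms, not the contents.
from dataclasses import dataclass

@dataclass
class Layer:
    contents: float   # stand-in for the layer's pre-rendered backing store
    opacity: float    # may be animated frame by frame

def composite(layers, background=0.0):
    """Blend layers back-to-front over the background; returns the final 'pixel'."""
    out = background
    for layer in layers:
        out = layer.contents * layer.opacity + out * (1.0 - layer.opacity)
    return out

# A fade just changes opacity each frame and re-composites; no view redraws itself.
print(composite([Layer(0.8, 1.0), Layer(0.3, 0.5)]))
```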
tviceman - Wednesday, July 24, 2013 - link
It's nice to see Nvidia make comparisons to their own products. In this case, outperforming an 8800 GTX puts things into good perspective when looking at Anand's mobile GPU benchmarks.
If Nvidia can deliver Logan "on time" then it truly will be a very, very great SoC. The biggest issue they'll still have to deal with is the A15's power-hungry design. Wayne's (Tegra 6) custom cores will hopefully be more power-conscious, like the Krait cores are.
xype - Wednesday, July 24, 2013 - link
Oh, wow, I am sure this time around their outlandish performance claims will actually come true and Apple, Samsung, Qualcomm, et al. will be totally outclassed.
Especially since we all know companies like Apple (whose A6X the "sometime next year" part is compared against) just decided to stop developing their mobile CPUs and will ship the next 4 iterations of each product with an A6X variant.
cdripper2 - Wednesday, July 24, 2013 - link
Forgotten about Lucid Virtu have we? It would seem we SHOULD ignore your further posts ;)
cdripper2 - Wednesday, July 24, 2013 - link
That was @ExarKun333; the post didn't work quite right there....
Concillian - Wednesday, July 24, 2013 - link
This is great news. Feature parity with PCs is enormous. Should be great for nVidia, and likely very bad news for PowerVR.
Stuka87 - Wednesday, July 24, 2013 - link
Glad to see them ditch Tegra, which was outdated the day it was released. Performance numbers should go way up with the Qualcomm chip.
Stuka87 - Wednesday, July 24, 2013 - link
Arg, wrong window, ignore my last comment :/
michael2k - Wednesday, July 24, 2013 - link
Except PowerVR Series 6 is essentially at parity, was available for licensing last January, and should be shipping this year. Logan isn't supposed to be out until next year.
HisDivineOrder - Wednesday, July 24, 2013 - link
The only time you can trust anything nVidia says about Tegra and product delivery is when it's been shipping for weeks and owned by consumers. Otherwise, they'll lie to your face right up until the very day they are supposed to be shipping product to consumers and then shrug and say, "Oh, sorry. Yah, not happening. Don't know when exactly it'll ship, but hey, it was totally unexpected. We totally didn't know we were going to miss the date until... just now."
This is why they lost so many contracts to Qualcomm, including the Nexus one. They're just way too unreliable.
Having such great API support and having it be highly compatible with PC gaming and console gaming will only be great when it happens to a company that actually delivers product on time and within promised spec.
And that company will almost certainly be Qualcomm.
Krysto - Wednesday, July 24, 2013 - link
Not denying what you said, but today I realized they probably dropped Tegra because Qualcomm supports OpenGL ES 3.0, and it was one of the main features of Android 4.3. Tegra 4 doesn't support it.
lmcd - Wednesday, July 24, 2013 - link
That has to be why, in my opinion.
HighTech4US - Wednesday, July 24, 2013 - link
Haters gotta hate.
lmcd - Wednesday, July 24, 2013 - link
Not like Qualcomm dropped the ball on Windows drivers or anything...
Of course it turned out that market was worthless, but you should pay a little more attention!
jasonelmore - Wednesday, July 24, 2013 - link
Kepler has the best performance per watt in the world, no matter which platform you're talking about.
This only makes sense now that Kepler is fully scalable.
Refuge - Thursday, July 25, 2013 - link
Thing is, it's coming to market too late if you ask me, and to be perfectly honest I wouldn't expect it to do much more than perform on par with its competition.
The only thing I can see saving this and making it a huge success would be if the 20nm yields are crap and they can't make enough to meet demand.
Arnulf - Wednesday, July 24, 2013 - link
The name is Kepler, as in Johannes Kepler: http://en.wikipedia.org/wiki/Johannes_Kepler
takeship - Wednesday, July 24, 2013 - link
See also for reference: Tegra 2 performance claims, Tegra 3 power claims.
godrilla - Wednesday, July 24, 2013 - link
This thing can run Crysis 1, six years later, 100x more efficiently than my 6-year-old 8800 Ultra. Now that is impressive.
chizow - Wednesday, July 24, 2013 - link
I guess Logan is the reason why Nvidia was so confident in its IP licensing overtures. Hopefully we'll see it licensed out to some of the bigger players out there. It'd be a shame to see Kepler mobile tech go to waste if it's necessarily tethered to Tegra.
happycamperjack - Wednesday, July 24, 2013 - link
I believe this is nVidia's marketing push toward Apple. Everything nVidia is doing lately (such as opening up mobile GPU licensing) points to nVidia wanting to get into Apple products. If they are successful, we can expect Apple products to feature an nVidia GPU in their late 2014 or 2015 lineups. Their PowerVR chips are lagging behind the competition in terms of cutting-edge features like the latest OpenGL implementations, and Apple is no doubt looking into updating their GPU line. Nvidia's GPU makes a lot of sense for Apple (experience with TSMC SoCs, lack of Android phones using nVidia's GPU this year).
Krysto - Thursday, July 25, 2013 - link
I'm sure Apple would evaluate it. But I think they'll just wait for Maxwell, a year after that.
I was already thinking Apple might quit Imagination in the next few years, because Imagination will be making their own MIPS chips and trying to get more into the Android world, and I don't think Apple will like that very much.
Plus, there are the technical reasons. I don't think Imagination will match Kepler/Maxwell anytime soon, probably not even in performance, let alone in features. It's really, REALLY hard to support all the latest and most advanced OpenGL features - see Intel, who have had tons of trouble making proper drivers over the years and are still barely at OpenGL 4.0 with Haswell.
happycamperjack - Thursday, July 25, 2013 - link
I guess it will all depend on how good the PowerVR Series 6 GPU turns out to be. And who knows, Apple might even use an Intel SoC in 2015, as the new 4.5W Haswell has been very impressive! If it has performance anywhere near the i5 in the Surface Pro, it's a no-brainer for Apple to seriously consider Haswell or its successors. One thing's for sure: it's gonna be harder than ever for Apple to figure out which road to take in the next 2 years.
watersb - Friday, July 26, 2013 - link
Apple currently has the resources to take all the "roads", then pick the one that works best for them. All before releasing a product.
Their culture of secrecy is mostly because they do not want clumsy ideas presented to the end-user.
michael2k - Thursday, July 25, 2013 - link
You missed the part where PowerVR 6 is competitive with Logan, but was available for licensing last January. Apple might very well investigate this GPU for their 2014 SoC, but then said SoC would be half as powerful as a PowerVR 6.
happycamperjack - Friday, July 26, 2013 - link
I watched some of their demos; it seems like PowerVR 6 is designed mostly for OpenGL ES 3.0. I highly doubt that it can compete with Logan on advanced OpenGL 4.x features like tessellation. In fact, I'm willing to bet that Logan will have tessellation performance more than 3 times that of PowerVR 6.
michael2k - Friday, July 26, 2013 - link
Until HW is available we can't know, but the problem isn't that it can't compete, it's that Logan will come out a year later. So even if Logan is 3 times faster, it will be a year later, and a revised PowerVR 6+ will be out instead.
lmcd - Wednesday, July 24, 2013 - link
We still haven't heard if this has an integrated-enough baseband for all of the lovely USA carriers. I assume it does based on T4i, but I don't trust Nvidia on that point.
Krysto - Thursday, July 25, 2013 - link
Nvidia uses a (quite revolutionary) soft-modem. All bands can be implemented in the modem's software.
The problem is they've been slow to bring it to market so far (same for Samsung and everyone else besides Qualcomm, really). But I doubt this will still be an issue next year.
andyroo77 - Thursday, July 25, 2013 - link
How does this compare to the newest high-end PowerVR graphics?
As Nvidia will be licensing this design, have they just dealt a death blow to Imagination Technologies' PowerVR, ARM's Mali, and Qualcomm's Adreno lines? It looks pretty good to me, but does it blow the latest and greatest from these other companies out of the water, or is it all marketing hype? Can anyone with the knowledge fill me in?
Krysto - Thursday, July 25, 2013 - link
Kepler might be unmatched, but there's still one way for Nvidia to fail - and that's going with 28nm. They'd be very, very stupid to do that, yet they probably will anyway. It's like they haven't learned their lesson from Tegra 3 vs the S4 last year.
But Tegra 6, with Nvidia's own Denver CPU core and a Maxwell GPU, arriving at 16nm FinFET just a year after Logan, should really be KILLER. But there's quite a bit of time until then.
Refuge - Thursday, July 25, 2013 - link
It is waaaaay too late in the story for them to change from 28nm.
If they do that then they might as well skip this Tegra and go to the next.
michael2k - Thursday, July 25, 2013 - link
PowerVR 6 is in the same performance and power envelope as this, but was available to partners last January and should be shipping soon.
djgandy - Thursday, July 25, 2013 - link
Same old crap from Nvidia. Nice comparison matrix, too. Everyone will support ES 3.0 by the time this comes out; it is Nvidia who left it out of their most recent chip, saying it was pointless.
As usual, they are saying a chip that is not due for a good year at least is better than a chip based on a 4-year-old graphics architecture.
yhselp - Thursday, July 25, 2013 - link
At some point in the near future, if Intel doesn't decide to do it themselves, would it be possible for an OEM to license NVIDIA IP and integrate it into an Intel ultra-mobile design?
JlHADJOE - Friday, July 26, 2013 - link
Now that would be interesting.
Maybe Nokia would do it for an upcoming W8 phone? Silvermont Atom + Logan. Bringing Windows, Intel and Nvidia from your desktop to your phone.
yhselp - Friday, July 26, 2013 - link
Sounds good, but there might be a few problems. Microsoft wants homogeneity between Windows phones, so they set requirements for the SoC (among other things). Back in the days of WP7 these rules were quite strict, which meant an OEM didn't have complete freedom in choosing the SoC. Nowadays, as far as I understand, there's only a set of minimum system requirements that an OEM has to meet. An Intel/NVIDIA SoC would obviously be more than powerful enough, but I wonder whether Microsoft would have anything to say about such an implementation. Furthermore, there's the question of the benefits of all this; while the NT kernel is there, the mobile OS would need some work to make proper use of all that power. Not to mention, having the same architecture and API doesn't immediately translate to running the exact same software from Windows 'PC' on Windows 'mobile'.
A Silvermont/Logan implementation, while great, is not that exciting. Next-gen Silvermont (hopefully wider) plus Maxwell on a smaller fab would be quite interesting.
HighTech4US - Thursday, July 25, 2013 - link
Anand: "NVIDIA got Logan silicon back from the fabs around 3 weeks ago, making it almost certain that we're dealing with some form of 28nm silicon here and not early 20nm samples."
I believe this is a wrong assumption and that the Logan sample is on 20nm.
As of April 14, 2013, TSMC has had 20nm risk production available. That is more than enough time for Logan to have been produced on 20nm.
http://www.cadence.com/Community/blogs/ii/archive/...
Quote: While TSMC has four "flavors" of its 28nm process, there is one 20nm process, 20SoC. "20nm planar HKMG [high-k metal gate] technology has already passed risk production with a very high yield and we are preparing for a very steep ramp in two GIGAFABs," Sun said.
Quote: Sun noted that 20SoC uses "second generation," gate-last HKMG technology and uses 64nm interconnect. Compared to the 28HPM process, it can offer a 20% speed improvement and 30% power reduction, in addition to a 1.9X density increase. Nearly 1,000 TSMC engineers are preparing for a "steep ramp" of this technology.
Refuge - Thursday, July 25, 2013 - link
I think you have high hopes, good sir, but I disagree with you on this one.
I would be struck silly if this came out on a 20nm process when released.
watersb - Friday, July 26, 2013 - link
Crikey.
wizfactor - Friday, July 26, 2013 - link
Those are some fantastic numbers! While I'm all for seeing Logan in future SOCs, I'm not a huge fan of seeing them in Tegra. If developers need to rebuild their mobile games (see Riptide GP for Android) just to optimize them for your chip, you're doing something wrong. I have yet to hear anecdotes about how pleasant an experience it is to port an Android game to a proprietary chip such as Tegra.
With that said, I'd love to see this GPU on other chipsets such as the Exynos, or even Apple's A-series chips. I can't help but think that Nvidia is teasing Apple into a license agreement here. I mean, the very fact that Apple could get more than double the graphics performance of their iPad 4 with a Kepler GPU under the exact same power constraint must be music to their ears. They could either dramatically increase iPad performance and eliminate any performance woes involved with driving that Retina Display, or they could get a massive boost in battery life while keeping performance levels similar.
Of course, it's Apple's call if they want to swap out Imagination for Nvidia. Let's hope Cupertino isn't too attached to its investment in the former.
michael2k - Friday, July 26, 2013 - link
Why would they want to swap to NVIDIA? The PowerVR Series 6, of comparable performance, was available for license last January and is expected to be in production this year. What you miss is that with the PowerVR Series 6 they can get more than four times the graphics performance of an iPad 4 one year earlier than with a Kepler GPU; why would they wait a year, then?
jipe4153 - Tuesday, July 30, 2013 - link
There isn't a single high-end PowerVR 6 such as the G6630 planned for production yet. It's just been paper launched.
- No engineering samples, nothing!
- Just numbers on paper!
Furthermore, their peak paper product, the G6630, is slated for a peak performance of ~230 GFLOPS, which is almost half the performance that Logan is sporting!
Logan has the winning recipe:
- Most powerful hardware
- Good efficiency
- Best software and driver stack on the market!
This is going to be a major upset on the mobile market.
phoenix_rizzen - Friday, December 20, 2013 - link
And yet, Apple shipped with a Series 6 GPU and not an nVidia one. Where's the major upset?
brianlew0827 - Friday, August 2, 2013 - link
I guess michael2k is paid by, or probably works for, Imgtech, since every single post of his praises PowerVR.