moaklerqt - Tuesday, April 5, 2016 - link
What are you talking about? The Wifi is working fine in the convention center!
crimsonson - Tuesday, April 5, 2016 - link
He is talking about HIS connection, not yours?
moaklerqt - Tuesday, April 5, 2016 - link
I guess I'll make my humor more obvious next time...
soccergirl99 - Tuesday, April 5, 2016 - link
Is that a question?? :)
JamesDean17 - Wednesday, April 6, 2016 - link
How nice of them to let that fat Mac guy experience Windows PC VR. He looked so excited about that Windows experience, and he even tells us he definitely wants to go there, to the Windows World of Things. Poor guy, I guess he just got sick of playing that old 2D Solitaire game on his brand-new useless Mac, the only game available for his machine. He got so excited, like he'd never seen a computer before. Well, that might be true after all, since Macs are more like a fool's joke than a computer - an April 1st company fools' joke.
I'm not a chatbot, nor Cortana, just a human being telling you the truth. Laugh, laugh, laugh... insert more laughs here... no user input needed... this is more laughable than ever.
D. Lister - Wednesday, April 6, 2016 - link
Switch to decaf before you end up with a cardiac episode.
Xanavi - Friday, April 15, 2016 - link
He personally designed the Apple II, including hardware and software. This guy pushed technology and progress forward during his time. You're thinking of the wrong person, fool.
osxandwindows - Tuesday, April 5, 2016 - link
Go nvidia.
xthetenth - Tuesday, April 5, 2016 - link
You just have all the bad opinions, don't you?
niva - Tuesday, April 5, 2016 - link
He most certainly does not have all the bad opinions. This doesn't answer the question of whether all of his opinions are bad, though!
marc1000 - Friday, April 8, 2016 - link
Go 16nm! I'm tired of waiting for the 28nm era to end.
yannigr2 - Tuesday, April 5, 2016 - link
GTX cards - No ASYNC Compute >>>>>>> Woz after a few minutes of Nvidia VR: "I feel dizzy"
Southrncomfortjm - Tuesday, April 5, 2016 - link
To be fair, he was spinning himself in circles. Pretty sure if you did that in real life you'd get pretty dizzy.
dragonsqrrl - Tuesday, April 5, 2016 - link
That's not an Nvidia headset, try harder.
yannigr2 - Wednesday, April 6, 2016 - link
So if I point to all those problems with Nvidia's latest WHQL drivers, your comment will be "Those are not Nvidia's PCs"? Can you even realize how stupid your comment is? I bet you don't.
dragonsqrrl - Wednesday, April 6, 2016 - link
"So if I point to all those problems with Nvidia's latest WHQL drivers, your comment will be "Those are not Nvidia's PCs"?"That's a pretty terrible analogy, although it does help me to understand your original comment a little better.
"Can you even realize how stupid your comment is?"
Like a lack of async compute resulting in Woz feeling dizzy during a VR demo? Do you realize how stupid that is?
You know, I think you actually began to illustrate the connection best in your own comment. No ASYNC Compute >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Woz feeling dizzy. There are so many other things that could have a greater impact on motion sickness, like the headset itself, or his natural susceptibility to motion sickness. But you place a particular emphasis on "No ASYNC Compute". I think your agenda is showing.
D. Lister - Wednesday, April 6, 2016 - link
lol, +1
FourEyedGeek - Tuesday, April 5, 2016 - link
Go Nivea!
extide - Tuesday, April 5, 2016 - link
GP100 is NOT the biggest FinFET chip ever made; Knights Landing is, at ~683mm^2 on Intel 14nm, and it's already shipping.
CrazyElf - Tuesday, April 5, 2016 - link
I believe Tukwila at 698mm^2 was the largest die ever made.
jasonelmore - Tuesday, April 5, 2016 - link
Not even close to being as dense as what Nvidia is doing. Knights Landing is something like 8 billion transistors; Nvidia's is way over that, even accounting for the RAM.
JoyTech - Tuesday, April 5, 2016 - link
No gaming GPUs??
dragonsqrrl - Tuesday, April 5, 2016 - link
GP100 probably will be a 'gaming GPU'.
Southrncomfortjm - Tuesday, April 5, 2016 - link
Can we get to the consumer GPUs already? Deep learning is cool and all, but I want to know about the Pascal GPUs I want to buy!
D. Lister - Wednesday, April 6, 2016 - link
Exactly. Let's hope they are saving the best for last, and it won't be something like, "Pascal? Oops, sorry, it'll take another year, but here is our boss back on stage with his antics, waving around a mock-up. Isn't he just adorable?"
lefty2 - Tuesday, April 5, 2016 - link
"The Mars demo was running on a Titan, but Jen-Hsun believes that's not enough. Need more performance to better physically simulate light"Maybe use a GPU that supports async compute ;)
DigitalFreak - Tuesday, April 5, 2016 - link
When you only have one single advantage over the competition, I guess you crow about it as much as possible.
jasonelmore - Tuesday, April 5, 2016 - link
Don't forget PRICE! lmao
anubis44 - Wednesday, April 6, 2016 - link
Well, it is a pretty damned serious advantage if you can do VR without making people sick, and your competitor can't, I'd say.
D. Lister - Wednesday, April 6, 2016 - link
"Maybe use a GPU that supports async compute ;)"Maybe they looked, but couldn't find one with reliable enough drivers. ;)
dragonsqrrl - Tuesday, April 5, 2016 - link
So it looks like GP100 does FP64 at 1/2 FP32 - exciting. No wonder it's such a massive die even on 16nm. That's going to be a huge performance boost for double precision workloads over GK110.
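To put rough numbers on that 1:2 ratio, using the announced specs: 3584 FP32 CUDA cores × 2 FLOPs per clock (FMA) × ~1.48 GHz boost ≈ 10.6 TFLOPS single precision, so half-rate FP64 lands at ≈ 5.3 TFLOPS. GK110 ran FP64 at 1:3, so a Tesla K40's ~4.3 TFLOPS FP32 gave only ~1.4 TFLOPS FP64 - hence the big double-precision jump.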
psychobriggsy - Tuesday, April 5, 2016 - link
Right, unless he does something after this cars segment, that's all the news we're getting.
No real Pascal hardware shown. Just an implication that some partners have engineering samples of a very high-end card targeting everything (DP, SP, HP, DL) and costing $10k+ a card.
Not that P100 is what any of us consumers would be interested in for gaming, so GP104? GP106? No, not shown.
dragonsqrrl - Tuesday, April 5, 2016 - link
"Not that P100 is what any of us consumers would be interested in for gaming"I don't think that's true. The popularity of the 980Ti, 780/Ti, 580/570, and Titan's seem to suggest otherwise.
"so GP104? GP106? No, not shown."
Not surprising. x104/106 GPUs typically aren't announced at GTC.
Yojimbo - Tuesday, April 5, 2016 - link
True. But GTC isn't for consumers, it's for CUDA developers.
Jay77 - Tuesday, April 5, 2016 - link
Let's hope AMD has a graphics card they want to sell me.
SunnyNW - Tuesday, April 5, 2016 - link
*IF* the rumor of AMD going with GloFo for 100% of their upcoming GPU dies is fact... does anyone think that might have been a mistake, considering GloFo's history of problems bringing up new nodes and yields, and the fact that TSMC already has the 600mm^2 GP100 in volume production?
I do understand yields are not as detrimental with such high-margin parts (HPC parts), but nonetheless it still seems to be a pretty big accomplishment, because the yields must still be somewhat reasonable, one would assume.
ppi - Tuesday, April 5, 2016 - link
AMD is probably trying to get the GloFo process working because they have an arrangement to buy a minimum number of wafers anyway. Going GloFo instead of TSMC means AMD is not effectively paying twice for their wafer production.
anubis44 - Wednesday, April 6, 2016 - link
GloFo is using Samsung's 14nm process. I wouldn't be so worried about problems because of this.
Pinn - Tuesday, April 5, 2016 - link
16GB HBM. So 8GB for the non-Titan?
dragonsqrrl - Tuesday, April 5, 2016 - link
It'll probably be more than 8GB. Currently it looks like the GP104-based card will be equipped with 8GB.
StrongDC - Tuesday, April 5, 2016 - link
GP100 -> 15.3B transistors, 610mm^2
Source: https://devblogs.nvidia.com/parallelforall/inside-...
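Taking the thread's figures at face value, this settles the density comparison above: 15.3B transistors / 610mm^2 ≈ 25 million transistors per mm^2 for GP100, versus the ~8B / ~683mm^2 ≈ 12 million per mm^2 quoted earlier for Knights Landing (the KNL numbers being the thread's rough estimates, not official figures).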
JamesDean17 - Wednesday, April 6, 2016 - link
Hey dude Jen-Hsun, how are you? You always seemed like a cool guy, and look how far down the rabbit hole you've brought us :p Just kidding, a little. What's the fuss about deep learning and A.I.? Did the machines and A.I. learn their lesson already, or do we have to beat them a second time around? lol
Now seriously, could you guys basically put those deep learning A.I. machines to good use and see if they can develop themselves into more powerful machines? Like creating a deep learning A.I. super-intelligent computer powerhouse that can basically self-develop into becoming more intelligent by itself, as well as developing more powerful Nvidia chips and software? Using such an advanced A.I. that could self-develop chips and software instead of using humans? Be careful, though, as such an advanced intelligent A.I. might corrupt your human minds, take control, and spell the end of humanity.
LemmingOverlord - Wednesday, April 6, 2016 - link
Nvidia Woodworks - kitchen cabinets & other furnishings
Nvidia Legworks - the courier service
Nvidia Wetworks - ...
HalloweenJack - Wednesday, April 6, 2016 - link
Done a lot of reading into this, and TBH I'm not that impressed at all - it's very FP16- and FP64-centric, and 10.6 TFLOPS? Fury X is 8.6 already, single precision; the extra transistors have gone into double precision, but FP32 (which is important to gamers) takes a knock. It reminds me of the AMD CPUs with their dual FP16 into FP32.
This isn't a card for gamers. Deep learning for sure, but GP1xx for PC will be a different beast.
Also - the 5 "miracle breakthroughs" - we already have 16nm products on the market, and interposer-connected memory? Fury from last year would like to ask you about that!
No wonder he looked so nervous with those blatant lies!
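On the "dual FP16 into FP32" point above: GP100's headline 2x FP16 rate comes from packing two 16-bit values into each 32-bit register and operating on both lanes at once. A minimal CUDA sketch of the idea, assuming a GPU with native packed-FP16 arithmetic (sm_53+; P100 is sm_60) - illustrative only, with hypothetical kernel names:

// build: nvcc -arch=sm_60 fp16x2.cu
#include <cuda_fp16.h>
#include <cstdio>

// Each __half2 holds two 16-bit floats in one 32-bit register;
// __hadd2 adds both lanes in a single instruction.
__global__ void init(__half2* a, __half2* b, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        a[i] = __floats2half2_rn(1.0f, 2.0f);
        b[i] = __floats2half2_rn(3.0f, 4.0f);
    }
}

__global__ void add_fp16x2(const __half2* a, const __half2* b,
                           __half2* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = __hadd2(a[i], b[i]);
    if (i == 0) {
        float2 r = __half22float2(__hadd2(a[0], b[0]));
        printf("lane0: %.1f %.1f\n", r.x, r.y);  // expect 4.0 5.0
    }
}

int main() {
    const int n = 1024;  // 1024 half2 = 2048 FP16 lanes
    __half2 *a, *b, *out;
    cudaMalloc(&a, n * sizeof(__half2));
    cudaMalloc(&b, n * sizeof(__half2));
    cudaMalloc(&out, n * sizeof(__half2));
    init<<<(n + 255) / 256, 256>>>(a, b, n);
    add_fp16x2<<<(n + 255) / 256, 256>>>(a, b, out, n);
    cudaDeviceSynchronize();
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}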
dragonsqrrl - Wednesday, April 6, 2016 - link
Yes, because when comparing AMD and Nvidia GPUs, we all know there's a direct correlation between theoretical floating point performance and real-world performance in games...
Qwertilot - Thursday, April 7, 2016 - link
There seems to be some question (on the forum at least) about whether this one could actually run games at all. It's definitely very heavily specialised towards the FP16/64 stuff, and they could have instituted that split the writers here were wondering about last year.
There'll definitely be a really big gamer-oriented card at some point, of course; we'll have to wait and see whether it's an eventual cut-down version of this or some more specialised variant.
vortmax2 - Thursday, April 7, 2016 - link
There's way more money to be had in the FP16/64 segment than in the gaming segment. From here on out, gamers will be taking a back seat... just business, nothing personal.
beck2050 - Tuesday, April 26, 2016 - link
The corporate audio-visual market is estimated to be above $100 billion in 2017. Nvidia is very well positioned. I remember how many idiot AMD fanbois were sneering at CUDA and calling it DOA. Whatever these guys do, the opposite is right.