Hurr Durr - Tuesday, March 27, 2018 - link
>that watermark
haukionkannel - Tuesday, March 27, 2018 - link
Now we know where all the memory went ;)
austinsguitar - Tuesday, March 27, 2018 - link
^
iter - Wednesday, March 28, 2018 - link
That was clear for quite some time. Nvidia put HBM into much higher-margin products and offered a better price for it, largely depriving AMD of access to the memory essential to Vega and justifying a price hike.
Praze - Tuesday, March 27, 2018 - link
"I would expect this card to run for north of $5,000"Correct, it's $8,999 at the Nvidia shop right now.
mczak - Tuesday, March 27, 2018 - link
Interestingly, it's "Limit 5 per customer". I would have expected these to be too expensive for mining :-).
Nvidia now sells 3 hideously expensive Quadros:
- Quadro GV100, 9k
- Quadro GP100, 7k
- Quadro P6000, 5k
The next one down the ladder (Quadro P5000) is downright a steal at only 2k :-)
Maxed Out - Tuesday, March 27, 2018 - link
Correct, miners are probably not looking at these... but 5 per customer... smells like render farms are being singled out, at least to me.
Ryan Smith - Tuesday, March 27, 2018 - link
While it's entirely possible to use the GV100 in render farms, I would be surprised if they ended up there. The strength of the hardware is more suited for desktop use, especially if you're going to take advantage of ray tracing.
Xpl1c1t - Tuesday, March 27, 2018 - link
I read that as signifying the return of high-end entertainment systems to an era of mainframe-esque beasts... So much for the democratization of high-end hardware that I have known and cherished for my entire life.
Cloud...
5G...
I'm not at all enthusiastic about the future of computing.
edzieba - Wednesday, March 28, 2018 - link
Eh, computing oscillates regularly between "Big server! Thin clients! Moar bandwidth!" and "Cheap servers! Fat clients! Low bandwidth!" and has done since inception.
mode_13h - Monday, April 2, 2018 - link
Not really. Your sense of scale is a muddled mess. There's no oscillation. PCs were never more powerful than mainframes. Cell phones were never more powerful than internet (or "cloud") servers.
The trend went from mainframes -> PCs, as the issue was less about data and more about economically scaling processing power. Then, the internet came along and started the growth of datacenters. The rise of smartphones, globalization, and big data only accelerated this trend. Now, we live in an era of not "either or" but "both and". There's computing power both at the edge AND the core, and developers are able to shift the work to wherever it makes the most sense.
But, the reality is that if you need a lot of power in one device, it's going to be expensive. It has always been and always will be thus.
mode_13h - Monday, April 2, 2018 - link
I don't get why you're complaining. The high-end has always been out of reach of mere mortals. No reason to think that wouldn't continue.
It's just that instead of being some IBM multi-core POWER CPU for mainframes, it's now Nvidia that has produced a GPU too big to be sold at a consumer price point. And you're no worse off for them having done so (except for the 12 nm manufacturing capacity being consumed by making it).
Instead of complaining about what you can't have, you can instead focus on what gains await consumers in the next few node shrinks of CPUs and GPUs.
Yojimbo - Tuesday, March 27, 2018 - link
The limit of 5 per customer probably has more to do with an overall supply shortage of the underlying GPU (GV100) and the HBM2 memory than with singling out any particular type of customer. It's the same GPU that is going into their high-end Tesla accelerators. The Titan V (which also uses the GV100 and HBM2 memory) is out of stock, so it seems they can't build them fast enough to keep up with demand.
Samus - Wednesday, March 28, 2018 - link
The ~120 TFLOPS of tensor performance makes this thing quite competitive at 9k when you consider that any alternative would cost more.
mode_13h - Monday, April 2, 2018 - link
Not competitive against Titan V.
The only real distinguishing factors between the two are memory size and NVLink.
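For rough context on that exchange, here is a back-of-the-envelope sketch. The GV100 price and tensor figure come from this article and thread; the Titan V entries (launch price around $2,999, roughly 110 tensor TFLOPS, 12 GB HBM2) are approximate assumptions rather than figures from the article.

    // Rough price-per-tensor-TFLOP comparison. GV100 figures are from this
    // article/thread; the Titan V entries are approximate assumptions, NOT
    // from the article (~$2,999 launch price, ~110 tensor TFLOPS, 12 GB HBM2).
    #include <cstdio>

    struct Card {
        const char* name;
        double price_usd;
        double tensor_tflops;
        int hbm2_gb;
    };

    int main() {
        const Card cards[] = {
            {"Quadro GV100", 8999.0, 118.5, 32},
            {"Titan V",      2999.0, 110.0, 12},  // assumed values, see above
        };
        for (const Card& c : cards) {
            std::printf("%-13s: ~$%.0f per tensor TFLOP, %d GB HBM2\n",
                        c.name, c.price_usd / c.tensor_tflops, c.hbm2_gb);
        }
        return 0;
    }

By that crude measure the Titan V delivers roughly three times the tensor throughput per dollar, so the Quadro's pitch rests on its 32 GB of HBM2 and NVLink rather than raw compute.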
blppt - Tuesday, March 27, 2018 - link
Aha, finally, the GPU capable of running FFXV at 4K 60fps maxed out!
What's that? No game-ready drivers? Bah.
Socius - Friday, March 30, 2018 - link
You can already do that with an overclocked Pascal Titan X.
juhatus - Wednesday, March 28, 2018 - link
"Tensor Performance 118.5 TLFOPs"
Surely TFLOPs?
Luscious - Friday, March 30, 2018 - link
Reading between the lines here, it seems SLI on the Pascal generation was neutered for good reason. Those high-bandwidth bridges just didn't work properly, OR Nvidia knew they would hit a wall with PCIe. NVLink was well under way by that point, but if NVLink2 improves on it by being totally transparent from the software side (if what was said is correct), then that gives game developers a HUGE break when it comes to engine support.
I can totally see NVLink2 replacing SLI on future gaming cards, probably under a new name like XLI. All Nvidia needs to do is make 3- and 4-way NVLink bridges available (they may already have them). We all know they want to sell more cards to gamers, so they wouldn't say no to enthusiasts who want two or more cards in their system, or to very high-end systems pushing 5K/8K panels.
mode_13h - Monday, April 2, 2018 - link
Your timeline is wrong. The *first* Pascal chip had NVLink. They could've included it in the rest, if they'd wanted to. Then, they had another shot with Titan V, as it launched well after the V100-based Tesla products ushered in NVLink2. But Titan V didn't have it (enabled, at least).
So, there's a clear pattern of Nvidia reserving NVLink for its workstation & datacenter products. Don't hold your breath for next-gen SLI. I think they probably regard PCIe 3+ as adequate for multi-GPU gaming.
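To illustrate what "totally transparent from the software side" can look like at the API level, here is a minimal sketch using the CUDA runtime's peer-access calls; it checks whether two GPUs can directly address each other's memory and enables that path. This is compute-side CUDA, not a game-engine API, so treat it only as an analogy for the SLI/NVLink exchange above; NVLink is simply the fast transport underneath this mechanism when it is present.

    // Minimal peer-to-peer access sketch (CUDA runtime API, compile with nvcc).
    // Requires at least two GPUs; over NVLink the peer path is fast, over
    // plain PCIe it may still be reported, just slower.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        if (count < 2) {
            std::printf("Need at least two GPUs for this demo.\n");
            return 0;
        }

        int zeroToOne = 0, oneToZero = 0;
        cudaDeviceCanAccessPeer(&zeroToOne, 0, 1);  // can device 0 map device 1's memory?
        cudaDeviceCanAccessPeer(&oneToZero, 1, 0);
        std::printf("peer access 0->1: %d, 1->0: %d\n", zeroToOne, oneToZero);

        if (zeroToOne) {
            cudaSetDevice(0);
            cudaDeviceEnablePeerAccess(1, 0);  // flags argument must currently be 0
            // From here, kernels launched on device 0 can dereference pointers
            // returned by cudaMalloc on device 1, with no explicit copies.
        }
        return 0;
    }

Graphics APIs expose multi-GPU differently (e.g. explicit multi-adapter in DirectX 12), so whether NVLink2 would really spare engine developers any work is exactly the open question in the exchange above.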