Shadow7037932 - Monday, March 14, 2016 - link
Wow. That's like a small space heater...
Ammaross - Monday, March 14, 2016 - link
Two of these in XFire would be the equivalent wattage of a midsize space heater for sure. ;)
Praze - Monday, March 14, 2016 - link
Funny thing is, a high-end PC is actually more efficient at heating a room than a space heater (as per Puget Systems).
Samus - Tuesday, March 15, 2016 - link
True, at least you get something out of it other than just heat :)
romulus3 - Wednesday, March 23, 2016 - link
Guys, I don't think Samus was referring to power consumption but rather to utility and benefit; with a PC you stay warm and do awesome shit. Y'all just need to chill.
Gigaplex - Tuesday, March 15, 2016 - link
How so? They'd both be pretty close to 100% efficient at converting electrical energy to heat (with some minor "losses" due to sound and light energy).
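As a back-of-the-envelope energy balance (treating light and sound that escape the room as the only non-heat outputs):

```latex
P_{\text{heat}} = P_{\text{in}} - P_{\text{light}} - P_{\text{sound}} \approx P_{\text{in}}
```

Essentially everything a PC or a resistive heater draws ends up as heat in the room, so any "more efficient" result presumably comes down to how the heat is distributed, not how it is converted.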
close - Tuesday, March 15, 2016 - link
Some space heaters rely solely on convection. A PC will also use fans.
extide - Wednesday, March 16, 2016 - link
Using fans would make it less efficient at converting energy into heat, actually.
Oxford Guy - Monday, March 14, 2016 - link
thrilling
medi03 - Tuesday, March 15, 2016 - link
At 350W? Seriously?
Valantar - Tuesday, March 15, 2016 - link
The post originally talked about 525W, the maximum power draw available through the PCIe slot plus three 8-pin connectors. The post did state that this was unconfirmed. After AMD fleshed out the specs, the post was updated, but the change was never noted anywhere.
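For reference, that 525W ceiling is just the sum of the PCIe specification limits for that connector layout:

```latex
75\,\text{W (slot)} + 3 \times 150\,\text{W (8-pin)} = 525\,\text{W}
```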
Wreckage - Monday, March 14, 2016 - link
24 pins for power! I bet the lights dim when you turn the system on. A shame it's crippled by the 4GB memory limit.
looncraz - Monday, March 14, 2016 - link
4GB VRAM is not yet a limiting factor for gaming. The 970 can get away with 3.5, can't it?
russomd - Monday, March 14, 2016 - link
At $1500 it's a future investment, and 4 gigs isn't enough for 4K games maxed out. The 4 gigs of RAM is a bad point about the card.
Kutark - Monday, March 14, 2016 - link
Exactly. Anybody who drops $1500 on this right now is, IMO, out of their mind. Particularly given that NVidia is within a few months of Pascal, which will likely have 8GB per card; you could SLI two of the x80 variants for ~$1100 (based on usual pricing at launch of a top-end card) and likely get significantly better performance, double the VRAM, and, my guess, probably 60-80% off the power of this card.
Kutark - Monday, March 14, 2016 - link
I meant to say 60-80% OF, not off.
Valantar - Tuesday, March 15, 2016 - link
This is probably why they're aiming it at VR developers. 4GB of RAM per eye is more than enough - we've yet to see any VR headsets even reaching total resolutions of 4K, let alone per eye. Most seem to run around QHD or something similar, in which case 4GB is plenty. And given proper support for Affinity Multi-GPU, you can halve those requirements per eye. In other words, this should be very future-proof for VR development.
Also, given the promised "semi-FirePro" level of driver validation and support, this should be a far better solution than CFX/SLI-ing a couple of cheaper GPUs together. After all, professional GPUs are EXPENSIVE: $2000 gets you a Quadro M5000, which is essentially a GTX 980 with better driver support and ECC memory, and while dual 980s in SLI would only cost you $1000, driver validation for the necessary applications would be nonexistent.
mobutu - Tuesday, March 15, 2016 - link
"future investment"?with obviously superior tech as pascal and polaris coming in half a year or so.
close - Tuesday, March 15, 2016 - link
Is 8GB enough? Maybe I'm missing something with your count, but this thing has 2 GPUs with 4GB each.
toxicfiend1957 - Tuesday, March 15, 2016 - link
8 gig of VRAM, not 4 - it's a dual GPU, 2 x 4096MB, so 8 gig, not 4.
Samus - Tuesday, March 15, 2016 - link
I still use a GTX 780 Ti with 3GB of RAM; it runs anything maxed out at 2560x1440... and it's almost what, 4 years old? I don't see 4GB being a limiting factor in this card's lifetime, because 4K monitors won't be a popular thing in this card's lifetime. Keep in mind 95% of desktop monitors (non-tablet, non-laptop, non-television) on the market are still 1080p or less, and 1366x768 is still common in mobile!
4K won't be a focus until the next video game console generation comes out... 3-4 years.
Le Geek - Tuesday, March 15, 2016 - link
It's been less than 2.5 years since the launch of the 780 Ti. 4 years is a "minor" exaggeration. Although you could argue that it is based on a 4-year-old architecture.
nagi603 - Tuesday, March 15, 2016 - link
It depends on the game and settings, but yes, in some cases, it is a limiting factor. GTA V can eat that up without going ultra at 1080p.
toxicfiend1957 - Tuesday, March 15, 2016 - link
It's not 4 gig of VRAM, it's 8 gig, as it's a dual GPU - that was how I read it on pcworld.com. This will be a kickarse GPU with HBM memory, but it will be better to wait till the end of the year, or the second run of GPUs, as any kinks will be ironed out. Hey, just my theory.
extide - Thursday, March 17, 2016 - link
Yeah, but it's still only 4GB per GPU, so it's really a 4GB card because you can't add together the VRAM pools.
jussnf - Wednesday, March 16, 2016 - link
There are a few models of 980 Ti that have an extra power connector, I think the MSI Lightning for example. And DX12 and Mantle can combine the VRAM for an effective 8 GB!
extide - Thursday, March 17, 2016 - link
Not really... sure, the new APIs allow developers to address the VRAM directly, but pretty much all of the assets in memory will need to be in both pools anyway... so maybe 5-6GB effective, MAYBE, but definitely not anything close to 8GB.
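Since a couple of comments hinge on what "addressing the VRAM directly" means under DX12, here is a minimal explicit multi-adapter sketch in C++ - an illustration of the general D3D12 pattern, not anything from AMD or the article. Each GPU gets its own independent device, and any asset both GPUs sample has to be created (and uploaded) once per device, which is exactly why the two 4GB pools don't simply add up to 8GB. Error handling omitted for brevity.

```cpp
// Minimal D3D12 explicit multi-adapter sketch (illustrative only).
// Build (MSVC): cl /EHsc ema.cpp d3d12.lib dxgi.lib
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    // Explicit multi-adapter: one fully independent D3D12 device per GPU.
    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        ComPtr<ID3D12Device> dev;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&dev))))
            devices.push_back(dev);
    }

    // An asset both GPUs sample (textures, geometry) must live in BOTH local
    // pools: one committed resource per device, each eating into that GPU's
    // own 4GB. Only per-GPU data (render targets, etc.) stays unique.
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type = D3D12_HEAP_TYPE_DEFAULT;

    D3D12_RESOURCE_DESC tex = {};
    tex.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    tex.Width            = 4096;
    tex.Height           = 4096;
    tex.DepthOrArraySize = 1;
    tex.MipLevels        = 1;
    tex.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    tex.SampleDesc.Count = 1;

    for (auto& dev : devices) {
        ComPtr<ID3D12Resource> copy;
        dev->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &tex,
                                     D3D12_RESOURCE_STATE_COPY_DEST, nullptr,
                                     IID_PPV_ARGS(&copy));
        // ...upload the same asset bytes into each per-GPU copy...
    }
    return 0;
}
```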
QinX - Monday, March 14, 2016 - link
I can see in the images that this card has 2 separate radiators for cooling? On the one hand, extra versatility; on the other, 2 pumps, twice the potential noise, and a slapped-together solution.
QinX - Monday, March 14, 2016 - link
Never mind, I now see it's some good tube management that deceived my eyes!
extide - Monday, March 14, 2016 - link
Oh, god, really - they are launching it as a PRO product to try and milk every last cent out of this thing. Wow. Whoever decided that they should "wait to release this card until VR is ready" is freaking dumb; they should have launched it 6 months ago when it was actually relevant. All of that coming from a big fan of AMD GPUs.
Shadow7037932 - Monday, March 14, 2016 - link
Yeah, with Polaris and Pascal coming Soon™ I don't see this being relevant for very long.
hansmuff - Monday, March 14, 2016 - link
I have no clue why anyone would buy this. VR at this time is specifically not ready for multi-GPU. 8GB cards are pushing 4GB cards out of the way, and whether or not that's great, on a $1500 monster I sure would like to see more than 4GB.
And what is VR-specific here, short of the name? What specifically was done to make this a "VR" card?
JKay6969AT - Monday, March 14, 2016 - link
One-GPU-per-eye rendering is pretty VR-centric (as an Oculus DK2 owner I'd love that, as VR rendering is far more taxing than the usual 1080p screen rendering).
I would say that 2x Fury X's would be fine for everyone else, except the small number of people who have only one PCIe slot and could only fit one FULL LENGTH card into their system.
This is AMD trying to get a decent Return On Investment on something, anything, and by golly they deserve it. For years AMD has been fighting with nVidia, which has SERIOUSLY hurt their profit margins and minimised how much nVidia could price gouge the shit out of the GPU market. $1000 for a Titan?!? And you complain that AMD is charging $1499 for a dual watercooled GPU featuring HBM on 2x 4096-bit memory buses!!!! Gimmie a break! AMD just can't win, can they?
hansmuff - Monday, March 14, 2016 - link
They probably can win, I don't know. Last time I really saw them "win" I bought a TBird 1.4GHz CPU. But that wasn't my point.
The one-GPU-per-eye is not at all a given yet - that is to say, it's not available - so my question is relevant. They allegedly have developed custom software that lets the 2nd GPU render the 1st GPU's output on an outside monitor, so that's good for VR production. That's the ONLY use case I've seen so far that actually speaks to this product and is relevant.
tuxfool - Monday, March 14, 2016 - link
"The one GPU per eye is not at all a given yet, that is to say, it's not available"It is part of the LiquidVR API. So it is available. The steam VR performance tester supports this.
JKay6969AT - Monday, March 14, 2016 - link
Exactly - and doubly so: even if it were a brand-new feature exclusive to this card, that would prove it was a card for VR; but the fact is the feature will also work for us peasants who can 'only' afford 2x Fury X's in Crossfire and not just this one card, making it an even more relevant feature in the long run.
I have been really disappointed with the VR gaming performance of my rig using my DK2, even though I have an Intel Core i7-4790K @ 4GHz, 16GB of G.Skill DDR3 RAM @ 2400MHz, and a PowerColor PCS+ OC R9 290X 4GB graphics card. This rig plays all my games just fine in 1080p on a screen but struggles with Project CARS in VR. I have considered Crossfire as a solution, but was waiting until next-gen cards like Pascal and Polaris came out, hoping that they would be more energy efficient, meaning I wouldn't need a 1500W+ PSU.
The consumer Rift and HTC Vive have 2x 1920x1200 screens @ 90Hz in them, the DK2 has one 1080p screen running at 75Hz and my system struggles to render smoothly. I think there are going to be a whole lot of people disappointed with their shiny new consumer VR headset if they have a comparable rig to mine, goodness knows how they will feel if they have lower specs...
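Taking the figures in that comment at face value, the raw pixel throughput the GPU has to sustain roughly triples:

```latex
\text{DK2: } 1920 \times 1080 \times 75\,\text{Hz} \approx 156\ \text{Mpix/s}, \qquad
\text{consumer HMD (as quoted): } 2 \times 1920 \times 1200 \times 90\,\text{Hz} \approx 415\ \text{Mpix/s}
```

(The shipping Rift CV1 and Vive panels are actually 1080x1200 per eye, which still works out to about 233 Mpix/s at 90Hz - roughly 1.5x the DK2 before any supersampling.)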
Kutark - Tuesday, March 15, 2016 - link
Except the Titan Black came out 3 months earlier, has 3x as much RAM, was faster, and was approx. $200-300 more. So, while I'm not arguing that an $1100 video card is a good value, comparatively speaking, getting that much performance that much earlier than a Fiji (again, assuming you're going to spend that much in the first place) was probably worth it.
Nevertheless, I do take some issue with the idea that NVidia is "price gouging". If they were charging, or trying to charge, say $800 for a 980 (not a Ti or Titan), then yes, that would be price gouging. But when you look at inflation, and the fact that the x80-series cards have been releasing around the $500-550 mark for almost a decade now, they've actually gotten cheaper.
For example, with the 280 they *did* try to price gouge - they wanted $649 for it, but dropped it to $500 within a few days of release because of competition from AMD.
So I'm not saying NVidia hasn't done it before, but they really haven't done any serious level of price gouging for a good long while now.
Now, if AMD doesn't continue to offer good competition and basically hands them a monopoly of "nobody else can make a good product", then my guess is we will see a lot of that.
iamkyle - Monday, March 14, 2016 - link
New from Sony - the 4GB HBM Radeon Memory Stick Pro Duo!
Praze - Monday, March 14, 2016 - link
The only spec I question on your predicted table here is the FP64 performance. As a prosumer-marketed card, it would make more sense for it to land somewhere between the FirePro (1:4+) and Radeon cards (1:16).
The 295X2 had 1:8, which seems like a nice number here. Then again, Nvidia neutered the Titan cards after the Titan Black, so maybe AMD is similarly trying to preserve their FirePro price tags.
SunnyNW - Monday, March 14, 2016 - link
I believe it has to do with hardware limitations (in both cases, for AMD and Nvidia). That is for sure the case with the Titan X, due to it utilizing the Maxwell architecture.
Ryan Smith - Tuesday, March 15, 2016 - link
Correct. Fiji only offers 1/16 FP64 performance, period. AMD put everything into making a maxed-out FP32 chip.
extide - Wednesday, March 16, 2016 - link
Fiji and Maxwell have neutered FP64 performance because they got stuck on 28nm, and they focused all of the die space on stuff relevant for gaming (FP32 compute), as opposed to FP64-capable hardware that would really only be used in certain compute loads.
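To put a number on that ratio, using the commonly cited ~8.6 TFLOPS single-precision peak of a full Fiji GPU (an outside figure, not from the comment):

```latex
8.6\ \text{TFLOPS}_{\text{FP32}} \times \tfrac{1}{16} \approx 0.54\ \text{TFLOPS}_{\text{FP64}}\ \text{per GPU}
```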
DigitalFreak - Monday, March 14, 2016 - link
According to the latest AMD roadmap slide, it appears that HBM2 won't come until Vega in 2017. Hynix is behind schedule with their HBM2 memory. Samsung, on the other hand, is in production now, so Nvidia has no such issues. It will be interesting to see if Nvidia releases their HBM2 Pascal card in May or waits until next year, since AMD won't have anything ready until then.
DigitalFreak - Monday, March 14, 2016 - link
BTW - Source: http://www.fudzilla.com/news/graphics/40207-amd-re...
SunnyNW - Monday, March 14, 2016 - link
One of the most interesting bits of information to come out of this event, at least for me, was the brief mention of the Navi architecture on AMD's roadmap - especially the text showing that it will feature "nexgen memory." I wonder if that was a typo or if they are indeed calling it "nexgen." I am very curious as to which type of memory architecture could/would be utilized on a future GPU. I will admit I have not been keeping up on future memory technologies pertaining to GPUs, so it makes me very curious. Any suggestions or speculation as to which next-gen memory this could be?
eldakka - Monday, March 14, 2016 - link
This would have been more interesting if they had based it on the Nano: 85-90% of the performance of the Fury X, but 2/3 the power (175W vs 275W for a single GPU).
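A quick sanity check on that trade-off, taking those numbers at face value:

```latex
\frac{0.85\ \text{to}\ 0.90\ \text{(relative performance)}}{175/275 \approx 0.64\ \text{(relative power)}} \approx 1.3\ \text{to}\ 1.4\times\ \text{perf/W}
```

So a Nano-based dual card would deliver roughly a third better efficiency, at the cost of 10-15% peak performance.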
prtskg - Tuesday, March 15, 2016 - link
I think it'll be based on the Nano; the 275W TDP hints at it.
themrsbusta - Monday, March 14, 2016 - link
In VR it's a Fury X for each eye XD
Kutark - Monday, March 14, 2016 - link
Sooooo, it's a dual Fiji on a single board for $200 more than you could buy 2 separately and Crossfire them...
I'm waiting to be impressed.
prtskg - Tuesday, March 15, 2016 - link
A single card is usually costlier than 2 cards. Also, this is a prosumer (Radeon Pro) card, while the Fury X is consumer only.
atlantico - Tuesday, March 15, 2016 - link
A single board takes less space than two in CrossFireX; a single board can also use an entire PCIe x16 link at full speed, unlike two boards; and a single board includes liquid cooling in this case, which means it's way, way more quiet during peak performance.
That's worth way more than 200 bucks.
Valantar - Tuesday, March 15, 2016 - link
Also, the value of the included driver validation and support is WAY beyond $200. After all, that's what you pay for in professional GPUs - the guarantee that they will work, error-free, in critical applications. There's a reason why a GTX 980 is $400 while the Quadro M5000 (a 980 with ECC memory and pro-level drivers) is $2000. This would be a steal for any developer usually working on pro-level hardware.
edzieba - Tuesday, March 15, 2016 - link
"While I had initially expected AMD to target the card at the VR consumer market, AMD has gone in a different direction. "This is no surprise at all: VR multi-GPU is something that must be explicitly implemented and optimised for by game developers. Thus far, there are NO applications that have done so outside of the GPU vendors' demo samples. If AMD were to sell this card for VR, they would effectively be selling a card that AT BEST performed the same as a Fuxy X, and at worst performed far worse if somebody tried to force multi-GPU at the driver in the same way multi-GPU works today.
toxicfiend1957 - Tuesday, March 15, 2016 - link
This would cost a lot more where I live (Australia) - we get ripped off big time on tech prices. Just a point: I was looking at Best Buy in the States, and an Acer Predator was $1999 there but between $3350 and $3800 here, so I don't see this going for much under $2500 or so.
Shadowmaster625 - Tuesday, March 15, 2016 - link
Didn't they announce this thing 3 months, 6 months, and 9 months ago?
zodiacfml - Tuesday, March 15, 2016 - link
I think they are right to position this for developers, as there really is no point in selling dual GPUs right now. Developers could use all the power money can buy at any time, and develop content for the next generation and lower-end GPUs.