11 Comments
austinsguitar - Thursday, September 21, 2017
This doesn't look good for AMD, it really doesn't. Good on them, I guess, for keeping up with multi-GPU stuff. But the market is showing that dual-GPU love is going down. This isn't going to be popular, especially when the cards are already overpriced at the moment. This is good for miners (gaming too), but everyone with one GPU, or even two on the other team, isn't going to say "omg AMD is making strides in multi-GPU stuff." Just buy a 1080 Ti Kingpin and save 300-400 dollars instead. This is just stupidity.
Alexvrb - Thursday, September 21, 2017
Are you on drugs? AMD and Nvidia alike are shying away from mGPU support. AMD added it as an "extra" for those who really want it (and have the money). It's not going to affect 99% of users one way or the other. It makes no difference.
ddriver - Thursday, September 21, 2017
I don't see a problem with it. Those numbers are good: you get almost perfect scaling, you pay twice, you get twice the performance. If you need more performance, then that's a good deal. There are some very high-res monitors out there which could use multi-GPU to sustain better framerates.
A 1080 Ti is about 35% better than a Vega, so it will not come anywhere near the performance level of 2 Vegas in CrossFire (see the rough arithmetic sketch after this comment). One could get 2x 1080 Ti too, if one needs and can afford it. Although tests reveal that it doesn't scale anywhere near what AMD claims on that slide. But then again, that slide most likely presents cherry-picked titles and results. Time will tell.
High graphics card prices are not really AMD's fault; it has historically been Nvidia pushing prices up while reducing the improvement every new generation provides. And of course the damned miners, thanks to whom retailers jack up prices, money that the retailers pocket rather than AMD.
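As a quick illustration of the comparison above, here is a back-of-envelope sketch in C++. The 1.35x figure is ddriver's own estimate; the 1.7x and 1.9x scaling factors are assumptions standing in for "typical" and "slide-claimed" dual-GPU scaling, not measured results.

```cpp
// Rough sketch only: relative-performance assumptions, not benchmark data.
#include <cstdio>
#include <initializer_list>

int main() {
    const double vega64    = 1.00;  // single RX Vega 64 as the baseline
    const double gtx1080ti = 1.35;  // assumed ~35% faster than one Vega 64
    // Dual-Vega mGPU scaling: a typical ~1.7x vs. the ~1.9x the slide implies.
    for (double scaling : {1.7, 1.9}) {
        std::printf("dual Vega 64 at %.1fx scaling = %.2fx baseline "
                    "(GTX 1080 Ti = %.2fx)\n",
                    scaling, vega64 * scaling, gtx1080ti);
    }
    return 0;
}
```

Under either assumption the dual-Vega setup comes out ahead of a single 1080 Ti on raw throughput, which is the point being made; whether that throughput arrives with acceptable frame times is the separate question raised later in the thread.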
BrokenCrayons - Friday, September 22, 2017
Almost perfect scaling is the exception rather than the norm. Percentage gains realized in adding a second GPU just don't reach double or even very close to double. To my knowledge, they never have, even in the days of SLI Voodoo 2s. Sure, there are still arguments in favor of adding another graphics card, but it's not realistic to fight in favor of mGPU setups when developers, AMD, and NVIDIA are all acknowledging we're in mGPU's sunset days. That's a lot of industry momentum for a dwindling number of people to resist.
CrazyElf - Thursday, September 21, 2017
We'll need independent reviews to check out the frame times in order to see what the performance is really like.
I suspect it will be comparable to the scaling we get on the RX 480 and existing AMD GPUs.
It won't be a miracle, but it will help. Of course, an Nvidia person can also point out that they can get a GTX 1070 or 1080 in SLI. With current prices, even 1080 Ti SLI might be an option.
Threska - Friday, September 22, 2017
Considering we just had a new Threadripper motherboard article where multi-GPU configurations are mentioned, the de-emphasis by AMD and Nvidia is interesting. In other words, all these high-bandwidth slots are going to be sitting mostly empty.
Egg - Friday, September 22, 2017
Those are intended for workstation use, which doesn't use this mGPU support.
MrSpadge - Friday, September 22, 2017
Nvidia should use their NVLink to connect chips on the same board. That should be enough bandwidth to make them look like a single GPU for the software, completely bypassing the ugly problems of current multi-GPU approaches. Not sure what else would be required inside the chip, but AMD could probably use their Infinity Fabric for that too.
And if that works, bring back the bridge to physically distribute the cards (better cooling & power delivery). Could be optical links in a few years.
msroadkill612 - Saturday, September 23, 2017
You are nearly all making a possibly correct, but contrary to the evidence, assumption.
"there is no mention of CrossFire terminology in the press release or driver notes. Rather, the technology is always referred to as "multi-GPU". While the exact mGPU limitations of Vega weren’t detailed, AMD appears to specify that only dual RX Vega56 or dual RX Vega64 configurations are officially supported, where in the past different configurations of the same GPU were officially compatible in CrossFire."
This could well refer to dual GPUs linked by fabric. There have been rumors of a dual-Vega GPU card from Asus, and it could be this that the driver relates to.
Think about it. Of course AMD is shot of CrossFire. Solving those horrid CrossFire/SLI coherency issues is exactly what fabric is all about. Nobody is saying multiple fabric-linked GPUs can't be done.
Ro_Ja - Saturday, September 23, 2017
Still waiting on the fix for full panel scaling not working. Fix it, AMD! Even Intel does a better job at that!
ravyne - Monday, September 25, 2017
Honestly, the days of AFR are at an end, and that was the low-hanging fruit CrossFire/SLI took advantage of. Rendering techniques today reuse a lot of information from the previous frame in order to drive visual quality up without redoing every computation each frame; AFR shines when frames are independent -- inter-frame dependencies cause the workload to serialize and performance scaling to collapse to near-zero pretty quickly. Combined with other effects, like greater frame jitter, and the fact that AFR, even assuming perfect scaling, doesn't decrease frame latency one bit but only increases the number of frames you see, a frame-rate improvement of less than 25% or so really isn't worth it, IMO. Spending money on a better single GPU can give you 25% pretty easily, with quicker, more consistent frame times. That's why the recommendation has always been that dual-GPU should really only be used with the highest-end cards.
This (and multi-threading) is also why new graphics APIs expose more granular synchronization primitives, and both explicit and implicit multi-GPU modes (a minimal sketch follows this comment). Explicit mode is extremely low-level, to the point that even a discrete AMD GPU can be leveraged together with an Intel integrated GPU, balancing tasks explicitly. Implicit ("linked") mode requires identical GPUs, but can automate a lot more of the details because it can assume identical behavior and even exchange internal-facing (non-API) data formats without conversion (say, hierarchical Z buffers, or proprietary compression, or the kinds of things only the hardware engineers and driver developers would know about) -- it lessens the burden of full generality and also opens up opportunities to cooperate at that even lower level.
What we don't quite have yet is a way for the hardware/driver to combine multiple GPUs and just present them as a single, big GPU. But that's the Holy Grail, and where the manufacturers are headed very soon (1-2 GPU generations). NVLink and Infinity Fabric lay the groundwork. Nvidia's big GPGPU Volta (the one with 1/2-rate double precision) is already right up against lithography aperture limits -- they literally cannot build a GPU with more compute units without a process shrink, and they published a paper (patent?) about multi-die GPUs. AMD is obviously reaping the rewards of the multi-die approach on the CPU side using Infinity Fabric, and they've already got Infinity Fabric at work in Vega's high-bandwidth cache controller (that's why it's got a huge virtual memory space, and why that 8 GB of HBM is more like a massive L3 victim cache than traditional VRAM). You'll see that leveraged by onboard SSDs in Vega-based GPUs designed for video production (they've already got 2 generations of products that do this without Infinity Fabric), but it'll be a boon for any GPU workload with really big data sets -- oil and gas, cinematic rendering, certain kinds of big-science problems; might see gobs of traditional DRAM too, one day.
Multi-die GPUs probably represent the next leap in GPU advancement, if for no other reason than that silicon process scaling is no longer able to keep pace with how quickly engineers can scale the architecture up. Process will still influence the size of the building blocks, power consumption, and cooling requirements, but multi-die frees engineers of aperture limits and untenable yields.
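To make the explicit-versus-implicit distinction above a bit more concrete, here is a minimal sketch of how an application might discover both options using Vulkan 1.1, chosen only as one example of the newer APIs mentioned (D3D12 has an equivalent split between independent adapters and linked-node adapters). The enumeration calls are standard Vulkan 1.1; the function and variable names are illustrative.

```cpp
// Minimal sketch: enumerate explicit (heterogeneous) and implicit (linked)
// multi-GPU options via Vulkan 1.1.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

void enumerate_mgpu_options(VkInstance instance) {
    // Explicit ("unlinked") mode: every GPU, e.g. a discrete AMD card plus an
    // Intel iGPU, shows up as its own physical device. The application owns
    // all scheduling, data transfer, and synchronization between them.
    uint32_t deviceCount = 0;
    vkEnumeratePhysicalDevices(instance, &deviceCount, nullptr);
    std::vector<VkPhysicalDevice> devices(deviceCount);
    vkEnumeratePhysicalDevices(instance, &deviceCount, devices.data());
    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);
        std::printf("explicit-mode candidate: %s\n", props.deviceName);
    }

    // Implicit ("linked") mode: identical GPUs the driver has tied together
    // appear as one device group; a single logical device created over the
    // group lets the driver share internal formats and details directly.
    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount);
    for (auto& g : groups) {
        g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
        g.pNext = nullptr;
    }
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());
    for (const auto& g : groups) {
        if (g.physicalDeviceCount > 1)
            std::printf("linked group of %u identical GPUs found\n",
                        g.physicalDeviceCount);
    }
}

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;  // device groups require Vulkan 1.1
    VkInstanceCreateInfo ci{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ci.pApplicationInfo = &app;
    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;
    enumerate_mgpu_options(instance);
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

In explicit mode the application schedules work and copies data between devices itself; in linked mode a single logical device spanning the group lets the driver handle the lower-level cooperation described in the comment above.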