31 Comments
bug77 - Wednesday, May 1, 2019 - link
Makes sense. Gamers are a whiny bunch; it's professionals that need RTRT the most. As I have repeatedly said, Nvidia would have been much better off confining their 1st-gen RTRT to Quadro cards. It seems Intel made more sense here (depending on how their implementation works out).
DanNeely - Wednesday, May 1, 2019 - link
At this point I strongly suspect that both ray tracing and DLSS fell significantly short of the performance NVidia originally hoped to achieve. If they had known performance was going to end up at its current levels, it would've made a lot more sense to limit it to the 2080 Ti/Titan-level cards (with maybe a heavily cut-down, off-label model built from bad dies as a cheaper option for developers) as a tech demo/developer platform, along with a promise of much wider availability in the next generation of cards, once the 7nm die shrink made fitting enough cores more feasible.
Opencg - Wednesday, May 1, 2019 - link
I'm not sure how they thought that 1 ray per pixel at 1080p/60fps was ever going to look good.
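For scale, here's a quick back-of-the-envelope sketch of that ray budget (assuming 1920x1080 at 60 fps and a single primary ray per pixel; bounce, shadow, and reflection rays would multiply it):

```python
# Back-of-the-envelope ray budget: 1 ray per pixel at 1080p/60fps.
# Assumes primary rays only; secondary rays (bounces, shadows, reflections)
# multiply the total.
width, height, fps = 1920, 1080, 60

rays_per_frame = width * height           # 2,073,600 (~2.07 million)
rays_per_second = rays_per_frame * fps    # 124,416,000 (~124 million)

print(f"{rays_per_frame / 1e6:.2f} Mrays/frame")
print(f"{rays_per_second / 1e9:.2f} Grays/s")  # ~0.12 Gigarays/s for primary rays alone
```

Against hardware that NVIDIA markets in "gigarays per second", that leaves very little headroom per pixel once secondary rays and denoising are added.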
willis936 - Wednesday, May 1, 2019 - link
That sounds like plenty for Asteroids :D
mode_13h - Wednesday, May 1, 2019 - link
Of course it doesn't, but their mistake was using Battlefield as a "launch" title.

They should've partnered with a couple of smaller, indie developers to make some little games that really showcase RTX's potential, and then included a copy with every RTX card. Then, while people were waiting for content, gamers would at least know what might be looming just over the horizon.
TEAMSWITCHER - Wednesday, May 1, 2019 - link
I disagree. Even with an RTX 2060, you can enable or disable RTX features as a user preference if the title supports it. The implementation of Global Illumination in Metro Exodus is particularly impressive if users want the full atmospheric experience over raw performance.KateH - Wednesday, May 1, 2019 - link
Came here to say this. If all the RT cores gave us was reflections, then yeah, that would be disappointing. But it turns out regular shader cores can do raytraced reflections just fine; what the dedicated hardware gets us is raytraced global illumination and ambient occlusion, which is a pretty big deal. I suggest RTX naysayers try Metro with and then without raytracing enabled (actually playing it, not just screenshots or YouTube analysis) and they will see what the big deal is.
And no, I'm not shilling for NVidia. If anything I'm a bit of a fangirl for AMD (go underdog!), but NV are the ones who have the hardware for raytraced GI right now, and as a cinematographer in real life, I have an eye for lighting that extends into the virtual world. Raytraced lighting is the real deal, even "half-baked first-gen" raytraced lighting. Can't wait to see how far graphics will go in a few years when the hardware gets beefier and more titles start incorporating RT effects.
0ldman79 - Wednesday, May 1, 2019 - link
Honestly, given the improvement ray tracing shows in global illumination, I'm thinking they're going to trade accuracy for speed anyway - even considering that the next gen will likely double the RT hardware.
Realtime *anything* is a sacrifice of quality for speed. It is just a simulation that is profoundly limited compared to real-world physics and lighting.
They'll figure out how to be accurate where needed and fudge the numbers further when applicable.
Crytek has already done something similar; otherwise it couldn't run on a regular GPU.
mode_13h - Wednesday, May 1, 2019 - link
Whatever they're doing, it can't be realtime global illumination - probably something pre-computed, which means it can't react to lighting changes or other environmental changes.

I think gamers often don't appreciate the way game designers try to work within the constraints of their engine. You can have a game that looks good, but perhaps partly because the designers had one hand tied behind their backs. With those constraints loosened, you could see the same or better realism in a more diverse range of environments, settings, lighting, and camera angles.
beginner99 - Thursday, May 2, 2019 - link
I rather think they simply didn't have the money to create two separate high-end GPUs, one with and one without RT. To make money from the large dies they needed to jack up prices, and that could only be done with extra features, even if those features are in reality not very useful.
I'm not sure about that. I think bifurcating their product line would cut RTX-series volumes to the point that prices would rise even further, which could push the cards out of the hands of even more consumers. It would also make things more expensive for the render farms and AI workloads they're trying to serve in the cloud. So the best move is to get the largest possible group of users to help shoulder the cost.

But there's another reason not to do it... I think their strategy was to leverage their current market dominance to force ray tracing hardware into the market. The goal being to raise the bar so that even if AMD and Intel catch up on conventional GPU performance, Nvidia will have an early lead in ray tracing and AI.
Like it or not, they do seem to be succeeding in breaking the typical chicken/egg problem of technological change, in this instance.
Skeptical123 - Tuesday, May 7, 2019 - link
I'm sure Nvidia had the money to create two separate high-end GPUs; I think not doing so was simply a business decision to maximize profits. After all, for years now they have used the same basic chip designs for both consumer and professional products, just "cut down" to tier their product-line pricing. I assume any changes there, while less visible than, say, the losses from reducing manufacturing scale, are far from consequential.
You might be right on the pricing point, but this was clearly a strategic move to build the installed base of Tensor and RT cores in gaming PCs. That installed base is needed for software developers to use their capabilities, which will give Nvidia a lead that's difficult for AMD and Intel to erode.

Basically, Nvidia is trying to make the GPU race about something more than mere rasterization performance, since it's only a matter of time before Intel and/or AMD eventually catch up on that front.
HStewart - Wednesday, May 1, 2019 - link
One thing to consider is that Intel will typically put its newer technology in enterprise components first, before moving it to the mainstream. Gamers are a very small subset of Intel's business.
BTW, Wccftech had no report of this, but they were quick to blab that Intel will not have 10nm for desktop until 2021 or 2022, and only in limited quantities earlier. In truth, people really don't know what is next. They also blab that AMD will be on 5nm before Intel is on 10nm or 7nm.
This was for the wrong article - oh, I hate forums that don't let you delete comments. So be it.
It must *REALLY* grind your gears that many people aren't dyed-in-the-wool fans of Intel. I think you see it as an affront to your existence or something.Korguz - Wednesday, May 1, 2019 - link
sa666666, come on... leave the Intel fanboy HStewart alone... he's very upset because Intel fell asleep again, forgot how to innovate, left us stuck at quad core in the mainstream for what, 4 or 5 years, only gave us what, a 10% performance increase over the previous generation, was the leader in manufacturing but now isn't... and now, having finally woken up, is scrambling to catch up in almost everything... his beloved Intel is no longer the leader in CPUs or process tech, and he doesn't know how to deal with it...
They didn't fall asleep - they spent too much of their profits in dividends and stock buybacks, instead of plowing it back into their manufacturing tech.Korguz - Thursday, May 2, 2019 - link
heh... same difference.. either way.. intel stumbled.. and is playing catch up again....Lord of the Bored - Wednesday, May 1, 2019 - link
More like Intel won't make something available on mainstream parts if they can fuse it off and sell the same part with the fuse unblown for a 10x markup.Yojimbo - Wednesday, May 1, 2019 - link
The FMX conference is about visual effects. Intel made this announcement to let those visual effects (film industry) customers know that they are planning to release GPU acceleration for render farms. It should not be taken to mean that Intel will only bring ray tracing acceleration to consumer GPUs later, or not at all - that question is simply outside the scope of this announcement. In fact, it's a good guess that once Intel has ray tracing acceleration capabilities, they will be interested in adding them to their consumer GPUs as well.
Oh, and don't take the "holistic platform" thing too seriously. All the information Intel releases publicly is heavily influenced by marketing. Intel has a CPU, and their main competitor for render farms is NVIDIA, who doesn't have one. So it's probably much like how AMD has been marketing their data center approach as "hey, we have CPUs and GPUs, so we can combine them" without saying anything specific or significant about why that makes any difference. I would certainly not assume that Intel has any superior way of combining resources when using their CPUs with their GPUs in comparison with what could be done with their CPUs and NVIDIA's GPUs, unless, of course, Intel restricts access from their competitors to a faster data pipeline between their CPU and their GPU. I'd imagine that would land them in antitrust trouble extremely quickly, though.
> I would certainly not assume that Intel has any superior way of combining resources when using their CPUs with their GPUs in comparison with what could be done with their CPUs and NVIDIA's GPUs

It's called CXL, and Nvidia isn't (yet) a consortium member (Mellanox now is, but that might not mean anything for Nvidia's GPUs...):
https://www.anandtech.com/show/14068/cxl-specifica...
Yojimbo - Thursday, May 2, 2019 - link
I know about CXL, but it wasn't important to the discussion; the point was about any faster data pipeline. Intel most likely came out with CXL because of their GPU and FPGA strategy. They could have allowed it years ago, as there has been a clear market demand for it for a while. But Intel has had such a dominant market share in servers that they didn't really need to worry about lacking that capability.

They cannot restrict it once they have it on their platform, as I said before. NVIDIA doesn't need to be a consortium member in order to use it. It's being delivered as an open industry standard, probably because once Intel decided to go down that road, they saw it as advantageous to try to kill off CCIX so they'd have more control over the situation.
wumpus - Wednesday, May 1, 2019 - link
I still wouldn't expect more than minimal support for hardware ray tracing (although it looks like the critical issues for the RTX units involve filtering the ray tracing effects, so presumably more support for that, especially if it includes additional machine-learning bits, which address a much bigger market anyway).

Then again, that assumes a non-raytracing graphical need for GPUs in the datacenter. Aside from Google wanting to move gaming there, I don't think there's much of a need. Perhaps they will be built entirely around raytracing (with the obvious ability to provide compute for all the non-graphics GPU needs in the datacenter; I'd think those needs should be bigger than the need for raytracing, but I can't be sure).
mode_13h - Wednesday, May 1, 2019 - link
> it looks like the critical issues for the RTX units involve filtering the ray tracing effects

That's just for de-noising global illumination, where they bring their tensor cores to bear on the problem.
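For anyone wondering what "de-noising" means in practice, here's a minimal toy sketch of an edge-aware spatial filter over a noisy 1-sample-per-pixel GI buffer. This is emphatically not NVIDIA's denoiser (theirs involves a trained network and temporal accumulation); it just illustrates the kind of filtering step involved, and all the names here are made up for the example.

```python
# Toy sketch only: edge-aware spatial blur over a noisy 1-spp GI buffer.
# Real-time denoisers (e.g. SVGF or learned denoisers) are far more
# sophisticated; this just shows the basic trade of blur for noise.
import numpy as np

def toy_denoise(noisy_gi, depth, radius=2, sigma_depth=0.1):
    """Average each pixel with its neighbours, down-weighting neighbours whose
    depth differs from the centre pixel (a crude edge-stopping function)."""
    h, w, _ = noisy_gi.shape
    accum = np.zeros_like(noisy_gi)
    weight_sum = np.zeros((h, w, 1))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_gi = np.roll(noisy_gi, (dy, dx), axis=(0, 1))
            shifted_depth = np.roll(depth, (dy, dx), axis=(0, 1))
            # Gaussian falloff on depth difference keeps geometric edges sharp-ish.
            wgt = np.exp(-((shifted_depth - depth) ** 2) / (2 * sigma_depth ** 2))[..., None]
            accum += shifted_gi * wgt
            weight_sum += wgt
    return accum / np.maximum(weight_sum, 1e-6)

# Synthetic example: noise drops after filtering.
rng = np.random.default_rng(0)
clean = np.full((270, 480, 3), 0.5)                        # flat grey "GI" signal
noisy = np.clip(clean + rng.normal(0, 0.3, clean.shape), 0, 1)
depth = np.ones((270, 480))                                # flat depth buffer
print("std before:", noisy.std(), "after:", toy_denoise(noisy, depth).std())
```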
Tom's has a pretty good article discussing the different types of ray tracing effects and benchmarking games on various RTX and GTX cards to see how they handle the load. Definitely worth a look!
https://www.tomshardware.com/reviews/nvidia-pascal...
Cullinaire - Wednesday, May 1, 2019 - link
Forget ray tracing for a moment... Let's hope they get raster right first.
While not exactly surprising for an Intel graph, the astonishing meaninglessness of the graph at the top is quite spectacular. It can essentially be boiled down to "hardware gets faster with time, by some unspecified amount, and there'll be multiple levels of performance for some hardware"...

It says teraflops to petaflops in the title, but the performance axis is unlabeled, so we have no idea where the divide is. And does a bigger circle mean that the products within that family will range from the top to the bottom of the circle's radius relative to the performance axis? In the pop-out extending from the Xe family, do the product categories correspond to performance within the radius of the circle relative to their placement in the stack, or does the pop-out stack itself place the performance on the axis? Or is it all just nonsense anyway, with no meaningful placement on the performance axis beyond "faster per year"?
mode_13h - Thursday, May 2, 2019 - link
You're over-thinking it. All you're supposed to get from that slide is:
* launch in 2020
* full product stack, from integrated + entry-level to enthusiast & data center
* (presumably) up to teraflops per GPU, scaling up to petaflops per installation.
That's it. Now calm down and stop trying to treat marketing material as though it's real technical literature. In my experience, it's only other marketing people who seem to have trouble seeing through the smokescreen and spin that pervade industry marketing material.
mode_13h - Thursday, May 2, 2019 - link
Oh, and I guess the other thing is that Gen11 launches first, and is a distinct generation from Xe.