100 Comments

  • AshlayW - Thursday, March 19, 2020 - link

    Oh god my eyes.
  • blppt - Thursday, March 19, 2020 - link

    It's nice and all, but can AMD finally get a GPU out there that can take on Nvidia's best? The RTX 2080 Ti has been around for a year and a half now, and AMD's best is a '1080 Ti-challenging' 5700 XT?

    Long past due for an AMD flagship gpu.
  • FreckledTrout - Thursday, March 19, 2020 - link

    Big Navi will do it, I'm certain of it. Not sure if it will be good enough for the next-gen 3080 Ti, but from everything we have learned so far it should outperform the 2080 Ti.
  • whatthe123 - Thursday, March 19, 2020 - link

    I mean if you compare the die size of the 5700xt to the 2080ti, AMD theoretically could have a competitor to the 2080ti by now. For whatever reason (probably keeping profit margin up) they've decided to cap their chips at 5700xt level. Unless RDNA 2 is a complete dud it should pass the 2080ti at least, but they don't seem interested in providing a halo product to compete with nvidia.
  • nevcairiel - Friday, March 20, 2020 - link

    The current Navi chip couldn't scale up that much if you consider power requirements.
  • whatthe123 - Friday, March 20, 2020 - link

    Lower clockspeed + more compute units. The 5700 draws significantly less power while not having significantly fewer shaders, suggesting the frequency contributes a lot to the XT's power draw. There are also reports of XTs running at lower voltage, so binning could also help for a halo product, but that would eat into their margins.
  • Sefem - Saturday, March 21, 2020 - link

    Reports? Any chip can be undervolted, and Turing isn't an exception.
  • whatthe123 - Saturday, March 21, 2020 - link

    I never said Turing couldn't be undervolted; I said it's possible they could bin chips to try to compete with Turing.
  • Spunjji - Monday, March 23, 2020 - link

    Yes, whatthe123, but Sefem already had a stock argument ready and they wanted to use it - even though it wasn't relevant. xD
  • yeeeeman - Friday, March 20, 2020 - link

    A not-insignificant amount of the RTX 2080 Ti's die size is taken up by RT cores and Tensor cores. So, to put it the other way, the RTX 2080 Ti GPU would be much closer to the RX 5700 XT if it weren't for those additional cores.
    Anyway, AMD desperately needs a big improvement in efficiency with RDNA2, because TBH, RDNA1 even on 7nm is bad. It has worse efficiency than Turing made on 12nm FFN ...
  • juliovillaz1990 - Friday, March 20, 2020 - link

    Are you serious? The RTX 2080 Ti has 68 SMs; compared to the 40 CUs of Navi 10, the 5700 XT is comparable to the RTX 2070 Super in compute units, not the 2080 Ti.
  • CiccioB - Friday, March 20, 2020 - link

    Are you serious? Comparing number and dimensions of "complex cores" of completely different architectures?
  • sullami - Saturday, March 21, 2020 - link

    Die size comparisons are indeed a thing and have been for some time: 251 mm² for the 5700 XT and 754 mm² for the 2080 Ti.
  • Sefem - Saturday, March 21, 2020 - link

    Bad idea to compare die sizes on two different nodes; it's better to compare transistor counts.
  • CiccioB - Monday, March 23, 2020 - link

    Yes, and what information does this bring?
    Have you made a list of every single feature each GPU has?
    RT, tensor, mesh shading, VRS, INT+FP...
    And which node do they use? Do you know that the use of a node is not a particular feature of a single producer, but that its adoption is based on many factors, including its costs/advantages?

    AMD just used the 7nm node advantages to create a chip that is somewhat on par with a GP102 3 years later. That's the chip you have to compare to for feature set. And it did that using the same power budget and transistor count. 3 years later with a node advantage.
  • Spunjji - Monday, March 23, 2020 - link

    People like CiccioB remind me of the Intel stans who panned Ryzen on release.
    "It's not better than the competition, so it might as well not have shown up".
    2 generations later...

    RDNA isn't as good as Turing, no. It's much better than GCN, though, and sets the path towards a far more efficient architecture. The second step along that road looks promising already.
  • Qasar - Monday, March 23, 2020 - link

    cicciob also said this in another thread : " start studying what's technology, what is marketing and what is fanboy hopes so in a discussion you can use the proper arguments to try to have a point.
    You continue to talk about "other ones" review to prove Navi is good. That simply means you can't make your own opinion. And that's simply because you do not know anything about technology and how to evaluate it. "
    So according to him, those of us that don't know about technology shouldn't read reviews to help us decide what to buy and what not to buy, and he just throws insults and names at you if you disagree with him. His views seem quite arrogant to me. I don't know much about technology, that's why I read reviews (as I'm sure most do, as well), and based on those reviews, I decide what to buy. Also, according to him, Navi is pretty much an utter failure. I would take what he says as biased, and very opinionated.
  • CiccioB - Monday, March 23, 2020 - link

    I have understood that people like you feel offended when they are revealed for what they are... ignorant people who can't even understand simple sentences, let alone an entire article full of technical data.

    Of all the sentences you have quoted, tell me where I say, or even imply, that you can't read a review and try to understand it as best you can to buy what you think is best for you.
    You can read (better, you MUST read) as much as you can, but you should not comment when your understanding level is on the border of functional illiteracy, as you have just demonstrated.
    Read, look, listen (most of you can understand things only if told, as you can't read properly), but my advice is not to come here posting stupid things as you did.
    It makes you appear what you are... simply an ignorant person who tries to get a clue about things he can't understand, but who wants to have a point and writes stupid things one after the other.
    You just did that, proving my point.
  • Qasar - Monday, March 23, 2020 - link

    Yea, OK there cicciob, and there you go, resorting to insults, and you call me ignorant? Yet you are very arrogant; just because you say you understand full articles of technical data, does that mean everyone else should as well?? Get over yourself and get off your high horse.
  • CiccioB - Tuesday, March 24, 2020 - link

    Again you can't follow a simple cause-effect pattern, and you come to call others arrogant.
    If you can't understand what's written in the article, that's your business.
    But if you come here to write absurd and childish comments, it is offensive to others who have better technological knowledge.

    <blockquote>just because you say you understand full articles of technical data, does that mean everyone else should as well</blockquote>
    Just out of curiosity, and to understand how pathetic you are at understanding simple text, where in my comments do I ever suggest such a thing?
    What I'm saying is that if you do not know how an engine works, don't go to a technical forum saying stupid things about how the cylinders should be and why the turbo pressure is wrong.
    Doing that, and then coming up with this childish comment out of nothing, demonstrating you have a clear problem understanding SIMPLE TEXT (let alone numbers, and putting them together), is just ridiculous.
  • Qasar - Tuesday, March 24, 2020 - link

    Keep throwing insults, that's all you can do and all you know how to do. BTW, you use block quotes (which don't seem to work on this comment system), I use quotation marks, same thing, it's called quoting someone. You really need to get off your high horse and get over yourself; you are so arrogant, you don't even realize it.
    " where in my comments do I ever suggest such a thing? " Almost every post you make says it. Look, just because you claim to understand everything about technology doesn't mean everyone else does. That's why people come to sites like this: they have an interest in technology and would like to try to learn a little more about it.
    So whatever, again, get off your high horse, and get over yourself.
  • CiccioB - Monday, March 23, 2020 - link

    The problem is that to gain efficiency AMD has doubled the transistor budget.
    It did that to try to overcome all of GCN's limitations. Now RDNA seems a much better architecture than GCN was, and actually it is. Unfortunately it comes without ANY of the more advanced features that are already present in Turing.

    RDNA has to be compared to Pascal in terms of efficiency and feature richness (or rather, the lack thereof).
    Do a direct comparison and see where AMD is more than 3 years later.
  • CiccioB - Monday, March 23, 2020 - link

    BTW, people like you remind me of those who wrote everything and anything to try to defend Bulldozer, and only when Zen came out understood what a winning architecture is.
    GCN is the same. It is a loser (even as the fat, obese version that is RDNA). When AMD comes out with a really competitive architecture (which means GCN will be discarded completely, as they did with the Bulldozer failure) you'll see what that will mean for its competitors (and at that time there will be two, not only one).

    But for now, RDNA is just an obese GCN that does not go anywhere, seeing as it uses too many transistors and a lot of energy despite the much-improved process. You see, any architecture can be good given enough transistors and energy. And it is only as good as the performance the competition can achieve with the same resources. And when you are not competitive, guess what happens? Yes, they start doing discounts. Sometimes even before actually launching the product.
  • Spunjji - Monday, March 23, 2020 - link

    Not this again...
    1) Efficiency of RDNA on 7nm is not worse than Turing on 12nm, it's about as even as you get.
    2) We already know RDNA 2 is ~50% better on PPW.
    3) Absolute nonsense about die sizes there.
  • Santoval - Thursday, March 19, 2020 - link

    Big Navi will almost certainly outperform Turing (series 20xx) but not Ampere (series 30xx). Since Big Navi will compete against Ampere, Nvidia will hold the performance crown for the next round of graphics cards as well. So the status quo will not change.
  • djayjp - Saturday, March 21, 2020 - link

    This. But not sure about performance at the same price point.
  • CiccioB - Tuesday, March 24, 2020 - link

    Price or costs?
  • Sefem - Saturday, March 21, 2020 - link

    On rasterization it's perfectly plausible, as they have a full node+ of advantage, but on ray tracing? That remains to be seen.
  • blppt - Thursday, March 19, 2020 - link

    That's my point though---they're (possibly) releasing a GPU that will match or beat an Nvidia flagship that's been around since late 2018. And the 3080 Ti is releasing soon too.

    I want another 7970 situation---when it came out, no consumer Nvidia product could match it.
  • lmcd - Thursday, March 19, 2020 - link

    More important is that AMD wins compute, for the sake of their long-term profits. Realistically, Vulkan Compute will be their only hope, with OpenCL appearing dead in the water.
  • yeeeeman - Friday, March 20, 2020 - link

    They are behind and can't do too much about it until they actually improve the efficiency of the design. GPUs have a power budget of 250W max, so even the RX 5700 XT, which is made on 7nm, maxes this out... while being more of a mid-to-high-end GPU.
  • CiccioB - Tuesday, March 24, 2020 - link

    Wrong, they matched the 2017 flagship GPU, GP102.
    With RDNA2 late this year they are aiming to beat the competitor's 2018 flagship GPU.

    When the 7970 came out, AMD had the process node advantage that Nvidia used just a few months later. The results are on record: 3 months later a puny x60-class GPU was enough to match AMD's flagship, and that started Nvidia's promotion of its tiny chips to higher and higher tiers and prices.

    AMD is trying to do that again, that is, trying to overcome the competition through the use of a better production node. But this time they are so far behind that they could not get out an absolute winner even in rasterization. And it's already a dead technology, seeing as it does not support any of the new features that the new consoles (= new games) will support in a year or two.
    While Turing supports them, despite costing a bit more (but consuming less on an older process... that's absurd!)
  • DannyH246 - Tuesday, March 24, 2020 - link

    @CiccioB - what is your point exactly? AMD is behind? So what? Everyone knows this!!!! Even AMD know it. Wow, what a tech-savvy guy you are for working out something so glaringly obvious.

    Now back in the real world - AMD have been cash-starved and have had to make do as best they can with what they had. They focused their remaining cash on Zen. And thankfully it saved the company. When Zen came out it wasn't a home run over Intel. But they iterated quickly and brought out Zen+; it still was not quite as good as Intel. But again they iterated quickly and now we have Zen 2, which arguably is the best CPU around. Furthermore, they are continuing to iterate with Zen 3 etc. A similar thing is now happening with Navi. Yes, RDNA 1 isn't as efficient an architecture as Nvidia's, but it's better than what they had before and at least gave people more options in the market. However, it looks like they are iterating quickly just like with Zen. RDNA2 will be out soon and looks to be a decent uplift over RDNA1. It will most likely not quite be as good as Nvidia's absolute best, but do you think AMD is stopping there?

    Having a competitive AMD in CPUs & GPUs is good for US!!!! Nvidia & Intel have been charging whatever the hell price they like for their respective CPUs & GPUs for years, as there was no competition. So you posting on here "oh, it's not as good as Nvidia's" 2-year-old whatever serves absolutely ZERO purpose unless you work for Nvidia or are paid by them in some way. Anything else just makes you look foolish.
  • Qasar - Wednesday, March 25, 2020 - link

    DannyH246, don't waste your time; he will just reply with insults, more BS, and personal opinions about how much of an utter failure Navi is, and continue to bash AMD's efforts and praise Nvidia, as he hates AMD. That alone should be obvious, no matter what he says. As I have said, he needs to get over himself, and get off his high horse.
  • Quantumz0d - Friday, March 20, 2020 - link

    I feel really surprised when people say the Xbox can outperform a 2070S / 1080 Ti with RT on top. It's stupid.

    AMD's R&D for RTG is shit. They have struggled for a crown GPU since Maxwell; it took 3 years to match a 1080 Ti, which is GP102 silicon, and on a 7nm node vs Nvidia's old 12nm / 16nm. Plus the die size of TU102, 754mm², is insanely big, on top of the fact that it has the Tensor cores for AI. Even on mobile AMD is shit. Their RX 480 GPUs suck a lot of power, up to 150W, and lost to the 970M.

    Somehow they managed to outperform Nvidia? Gimme a break. DF is far too positive. Even the shitty Minecraft demo was crap. Running a crappy Gears 5 (read it on Steam) with some pseudo remastering plus an exclusive build on XSX vs a 2080 Ti is shit. The official benchmark ran on what? A shitty Zen+ EOL TR processor instead of a Ryzen 3700X or 9700K; that's a big joke..

    Adding the uncore from Zen 2 and 2x 4C CCD chips at 80-90mm² to this new silicon, plus the highly inefficient AMD GCN/RDNA2, it's not going to beat a 2080 Ti at all. Epic. AMD's shitty accelerators are crap vs CUDA, and on top of that Nvidia has AI R&D in IoT/Automotive. They spent a whole uArch with costly HBM2 just for compute, 3 years back: Volta. And you guys are assuming AMD is beating that goliath? Add AMD's x86 R&D budget too.
  • blppt - Friday, March 20, 2020 - link

    They might beat an equivalent PC (talking next gen Xbox, right?) in a specific game with 2070/1080ti because of the inherent bump that console coding vs. PC coding automatically gets (because of fixed hardware).

    Look at RDR2---somehow, they got it to run at 4K, 30fps native rendering on an outdated, underpowered Jaguar CPU and an outdated pre-Vega GPU, with 8GB total available memory on the Xbox One X. An example of fixed hardware platform benefits right there.
  • D. Lister - Friday, March 20, 2020 - link

    RDR2 on the XBox1X runs at much lower graphics settings though. 4K isn't enough by itself, you need complex textures and geometry to get any visual benefit out of 2160p, otherwise "4K" is little more than a fancy marketing buzzword to swindle kids and rubes.
  • blppt - Friday, March 20, 2020 - link

    I understand that 4K on the X1X is nowhere near "far right slider" on the PC, but when you consider the pathetic CPU and ancient GPU that they got it to run on at 4K native, it's still very impressive.
  • Sefem - Saturday, March 21, 2020 - link

    It's not really native 4K; some effects (SSAO, volumetrics, etc.) use lower-resolution buffers and generate visible artifacts, and some of the settings are even lower than the lowest available on PC. Comparisons aren't so simple.
  • Spunjji - Monday, March 23, 2020 - link

    Too much crap to fully debunk, but here's a taste: Vega 56 on laptops equalled the 1070 and RX480/580 equalled the 980M. 🤷‍♂️
  • CaedenV - Friday, March 20, 2020 - link

    Big Navi is kinda like Half Life 3. It is coming... maybe... someday... it would be nice if it would come out...
  • nt300 - Thursday, April 9, 2020 - link

    The most recent rumors suggest Big Navi is going to be more than 80% faster than the current 5700 XT, which puts it around 30% to 35% faster than the 2080 Ti. This tells me that RDNA2 is meant to compete with the 3080 series lineup.
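As a sanity check on that rumored arithmetic: the two figures only line up if you assume the 2080 Ti is itself roughly 35-40% faster than the 5700 XT. That gap is a ballpark assumption here, not a measured number, and so is the 80% rumor; a quick sketch:

```python
# Hypothetical sanity check of the rumored figures; both ratios are assumptions.
big_navi_vs_5700xt = 1.80      # rumor: "more than 80% faster than the 5700 XT"
rtx2080ti_vs_5700xt = 1.38     # assumed ~38% gap between the 2080 Ti and the 5700 XT

big_navi_vs_2080ti = big_navi_vs_5700xt / rtx2080ti_vs_5700xt
print(f"Big Navi vs 2080 Ti: ~{(big_navi_vs_2080ti - 1) * 100:.0f}% faster")  # ~30%
```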
  • yetanotherhuman - Friday, March 20, 2020 - link

    1080 Ti is still way ahead of the 5700XT. AMD haven't even reached it.
  • blppt - Friday, March 20, 2020 - link

    Well, I was giving it the benefit of the doubt... I know that in at least RDR2 the 5700 XT is very close to 1080 Ti performance, and that is probably still the king of the "system melter" AAA games currently.
  • willis936 - Thursday, March 19, 2020 - link

    It's impressive looking and all, but couldn't they have toned it down a smidge so they wouldn't have cinematic frame rates?
  • jeremyshaw - Thursday, March 19, 2020 - link

    If you have a RTRT capable card, try Quake 2 RTX.

    The Nvidia intern (now a PhD) who initially developed it added a hotkey to make all surfaces reflective mirrors, because he wanted to dispel the myth that mirror reflections are costly in terms of raytracing. Remember, he did most of this work years BEFORE Nvidia started moving RTX hardware, and it was originally a CPU-driven raytracer (before moving to CUDA, then in 2018, Vulkan RT extensions).

    As we now know, clean mirror reflections are the least computationally intensive RT feature around.

    Basically, what I'm getting at is that AMD chose all those mirrors for a reason. They cannot turn it down a smidge without losing even more performance.
  • whatthe123 - Thursday, March 19, 2020 - link

    RT is definitely less resource intensive than the old brute force "render everything again at a mirrored angle" method, but you still need to draw on those surfaces when you make them reflective. I don't think it was a good idea to just smear mirrors everywhere. The mirrors closest to the model also look unusually blurry.

    I don't think a lot of thought was put into this demo. Crytek's raytracing demo was running on AMD hardware and was a much better demo of the benefits of tracing.
  • CiccioB - Friday, March 20, 2020 - link

    Crytek's raytracing demo was revealed to be a fake, as most of the ray tracing data was pre-calculated and not done in real time.
    That's why it was so "smooth" even on a GPU without RT hardware acceleration.
  • Sefem - Saturday, March 21, 2020 - link

    They are blurry because they are half resolution
  • Santoval - Thursday, March 19, 2020 - link

    "As we now know, clean mirror reflections are the least computationally intensive RT feature around."
    I was unaware. So which are the most computationally intensive features of ray-tracing? Are refraction effects, caustics, indirect lighting (and, more generally, global illumination) more expensive?
  • Zizy - Friday, March 20, 2020 - link

    For a mirror reflection you simply have one ray as input, one fixed ray as output, and you don't need many of them to get accurate visuals of the structure. Despite many bounces, the number of rays remains manageable because you only add one extra ray each time. Besides the shiny look here, each ray probably bounced just a few times on average (say 3x). On the other hand, something diffuse requires various tricks or tons of output rays to get something resembling correct visuals - each such surface requires hundreds of rays even if you want something as crude as sampling every 5 degrees. (That said, this is from my experience doing optical simulations; I never did RT in games.)
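To put rough numbers on the mirror-vs-diffuse point above, here is a minimal back-of-the-envelope sketch; the bounce depth and samples-per-hit are invented illustrative values, not anything measured from the demo or from a real renderer:

```python
# Illustrative ray-count comparison: a perfect mirror chain adds one ray per
# bounce (linear growth), while a diffuse surface spawns many sample rays at
# every hit (geometric growth).

def mirror_rays(max_bounces: int) -> int:
    """One primary ray plus one reflection ray per bounce."""
    return 1 + max_bounces

def diffuse_rays(max_bounces: int, samples_per_hit: int) -> int:
    """Every hit spawns 'samples_per_hit' new rays."""
    total, frontier = 0, 1            # 'frontier' = rays alive at the current depth
    for _ in range(max_bounces + 1):
        total += frontier
        frontier *= samples_per_hit
    return total

if __name__ == "__main__":
    print("mirror chain, 3 bounces:       ", mirror_rays(3))        # 4 rays
    print("diffuse, 3 bounces, 64 samples:", diffuse_rays(3, 64))   # 266,305 rays
```

Real-time renderers sidestep the geometric blow-up with a handful of samples per pixel plus denoising, but the asymmetry is why a demo full of clean mirrors is comparatively cheap to trace.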
  • willis936 - Friday, March 20, 2020 - link

    Walking a ray doesn’t seem very computationally intensive compared to, say, rendering another thousand pixels from a different perspective (ie an entire shiny surface).
  • Sefem - Saturday, March 21, 2020 - link

    Thread divergence caused by the incoherent rays generated by rough surfaces tanks GPU performance.
  • D. Lister - Saturday, March 21, 2020 - link

    "So which are the most computationally intensive features of ray-tracing?"

    Global illumination, with raytraced ambient occlusion and multiple intersecting dynamic light sources.
  • yeeeeman - Friday, March 20, 2020 - link

    Yep, I also suspect their ray tracing hardware is a bit hit and miss for this first generation.
  • Sefem - Saturday, March 21, 2020 - link

    Exactly, and the reflections in the AMD demo are half resolution too, but I don't know how deep (a reflection of a reflection of a reflection...) they go.
  • Threska - Thursday, March 19, 2020 - link

    The one time "Oh! Shiny." applies.
  • Santoval - Thursday, March 19, 2020 - link

    The demo has way too many mirrors and reflective surfaces. I wonder how it would look if the sky was not cloudy and it was high noon..
  • mode_13h - Friday, March 20, 2020 - link

    They do that just to show it's not faked and so you can see how many secondary-bounces they support. Another mark of quality to look for is aliasing in the reflections.
  • Dribble - Friday, March 20, 2020 - link

    They do that because it's easy - they have very few different types of surface to model in that demo, and they are all super shiny, which is the easiest to ray trace. That said, you've got to start somewhere, but I hope they can do a lot more than that.
  • mode_13h - Friday, March 20, 2020 - link

    Errr... no.

    First, I don't know why you think the number of surface types matters so much; I don't think it does.

    Second, there are plenty of surfaces with diffuse reflections and even some with ripples. They're not all super-shiny, nor are they all flat. Curved, reflective surfaces are harder to ray trace well, due to the increased demand they place on anti-aliasing (which requires multiple rays per pixel).

    Third, super shiny surfaces aren't easiest to ray-trace, because you have to worry about lots of secondary reflections. Non-reflective surfaces are easiest to ray trace.
  • mode_13h - Friday, March 20, 2020 - link

    In that last sentence, I should've also stipulated non-refractive.

    And this presumes we're not talking about global illumination, where all surfaces are effectively reflective, to some degree.
  • Sefem - Saturday, March 21, 2020 - link

    It matters anyway, but it matters more because of the implementation (inline RT). The problem is thread divergence, which kills performance on GPUs. Dynamic-shading ray tracing gets away with it thanks to a shader table and a different execution model, but inline ray tracing is really sensitive to it; that's why, generally speaking, it is better suited to situations where fewer or simpler shaders are used, while in other cases the former performs better.

    Yes, curved surfaces are a bit harder than flat ones, but that's nowhere near as hard as diffuse surfaces. And do they have secondary reflections, though? How deep do they go? Are RT reflections applied to diffuse surfaces? I don't think the inline approach loves diffuse materials much.
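As a loose illustration of the divergence point, here is a toy SIMT cost model; the wavefront size, material names, and shader costs are all invented, and this is not tied to DXR, RDNA 2, or this demo. The idea: within one wavefront, every distinct hit shader that any lane needs ends up executing serially, so coherent mirror rays stay cheap while rays scattered by rough surfaces pay for the whole mix.

```python
import random

WAVE_SIZE = 32                     # lanes per wavefront (toy value)
SHADER_COST = {"mirror": 1.0, "glass": 3.0, "cloth": 5.0, "skin": 8.0}

def wave_cost(materials):
    """Toy SIMT model: the wave serially runs every unique shader its lanes hit."""
    return sum(SHADER_COST[m] for m in set(materials))

random.seed(0)
coherent = ["mirror"] * WAVE_SIZE                            # rays bounced off one big flat mirror
incoherent = random.choices(list(SHADER_COST), k=WAVE_SIZE)  # rays scattered by rough surfaces

print("coherent wave cost:  ", wave_cost(coherent))    # 1.0
print("incoherent wave cost:", wave_cost(incoherent))  # up to 17.0 when all four shaders appear
```

Shader-table dispatch gives the scheduler a chance to regroup rays by the shader they need, which is the trade-off against inline RT being described above.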
  • watzupken - Thursday, March 19, 2020 - link

    It's a demo for RT, so shiny is an "in your face" way to showcase this technology. If you look for RT in games, you may have to look around to spot the RT taking place.
  • Samus - Thursday, March 19, 2020 - link

    AMD used to have all the best demos.

    Not sure what happened :P
  • mode_13h - Friday, March 20, 2020 - link

    Granted, Nvidia's RTX demos were better. However, ask yourself whether you'd rather AMD put its resources into the underlying tech or fancy demos (assuming you can't have both).
  • Dizoja86 - Friday, March 20, 2020 - link

    The cost for a tech demo is not remotely comparable to the cost of developing hardware. If AMD only has a few thousand dollars to develop their next GPU, then they're in serious trouble.
  • mode_13h - Friday, March 20, 2020 - link

    Don't think hardware, but rather software. Their software development resources are clearly stretched. They probably don't have software development resources to spare on better-quality demos.

    That said, you make a good point about budgets, and I suppose they could've hired some outside firm to make a nicer demo.
  • yeeeeman - Friday, March 20, 2020 - link

    It is not like AMD made an amazing RT hardware because they cheaped out on the tech demo...
  • Dizoja86 - Thursday, March 19, 2020 - link

    Good lord that was hideous.

    I know there's raytracing going on, but this just reminds me of how games looked in the early 2000's.
  • shabby - Friday, March 20, 2020 - link

    Yup I think amd forgot how to make demos.
  • mode_13h - Friday, March 20, 2020 - link

    OMG, the face-sag of that dude in the corner of the AMD slides immediately makes me worried he's having a stroke. I know there are other things that can cause that, however.

    I knew a guy who had a minor stroke at about the age of 25. It's not just seniors that have to worry.
  • Spunjji - Monday, March 23, 2020 - link

    It's entirely possible that's normal for his face - Bell's palsy can be caused by many things, and I know a few people who have it permanently.
  • mode_13h - Friday, March 20, 2020 - link

    Regarding the video: OMG, look out! It's Nvidia behind you!
  • chrkv - Friday, March 20, 2020 - link

    Is there a reason why only Xbox is mentioned to have RDNA 2 while PS5's GPU is confirmed by Sony to have the same architecture?
  • Zizy - Friday, March 20, 2020 - link

    Because this is a DXR tech demo and only Xbox will use that.
  • BlakeBB - Friday, March 20, 2020 - link

    Bite my shiny metal as.s!
  • BenSkywalker - Friday, March 20, 2020 - link

    That looks like a cube map demo, and not a good one.
  • willis936 - Friday, March 20, 2020 - link

    Cube maps only work for static scenery.
  • yeeeeman - Friday, March 20, 2020 - link

    Where is the ray tracing? Looks like a demo made for Radeon 9700 Pro.
  • Spunjji - Monday, March 23, 2020 - link

    wHeRe Is ThE rAyTrAcInG

    I didn't even ask that when I saw Battlefield 5 running, and the raytracing in that was borderline invisible 80% of the time. It's literally everywhere in this demo.

    The funny part is the demo really does look like mid-2000s GPU box art, but you had to go and be a tool about it.
  • CiccioB - Friday, March 20, 2020 - link

    Let's come back when we are talking about real raytracing effects: global illumination and transparency (the most undervalued gfx effects ever), something that no rasterization tricks can simulate except for pre-determined angles of view.

    We do not know exactly what is computed in real time and what's not in this demo, as a pre-determined walking path and angles of view can hide pre-rendered illumination and filtering data, things that cannot be used in free view mode.

    BTW, I hope the race for useless high resolution as a way to improve image quality will end with the widespread use of ray tracing.
    Today image quality is still sh*t in terms of polygons and light accuracy, and no high-resolution texture mapped at 4K or 8K can fix that.
    See Blu-ray HD computer-generated animations: they are "only" Full HD. But some of them are incredibly realistic nonetheless.
    I hope to see this kind of image quality in the near future, not those crappy 10-to-100-polygon models hyper-textured all over a 65" screen.
  • Sefem - Saturday, March 21, 2020 - link

    Many developers have changed their view with ray tracing: nicer pixels are better than more pixels.
  • CiccioB - Monday, March 23, 2020 - link

    But many users who have bought a 4K monitor believing it would improve image quality will not accept smooth FPS generation at a lower resolution. They want 60+ FPS at 4K with everything on. That will be the only way to make them accept raytracing as a useful feature.
    Otherwise they'll always go with big texture packs, no light effects, 10 polygons, and be happy with rasterization tricks that "improve image quality".

    See the comments they make when looking at the performance you get with Turing and RT on... "ohww, that thing can't hold 60 FPS at 4K... useless".
  • Spunjji - Monday, March 23, 2020 - link

    ...? 😕
  • Rick95 - Friday, March 20, 2020 - link

    Isn't the point of ray tracing photorealism?
    Does that shiny mess look realistic to you?
    Yeah, just what I thought...
    Cool to see ray tracing running on AMD hardware though.
  • willis936 - Friday, March 20, 2020 - link

    If you don't understand a technology, then a technology demonstration is not for you.
  • CiccioB - Tuesday, March 24, 2020 - link

    Well, no.
    Raytracing is much more, and better, than simply shiny reflections.
    That's what was used as a demonstration 30 years ago, with glass balls and boxes with shiny walls.

    Raytracing is about the quality of the light, and there are dozens of ways to demonstrate its quality and advantages.
    But it all comes down to the judgment capacity of gamers, which is nil.
    They do not care about good light effects, about walking in a forest with shaded light and real volumetric occlusion and shadows.
    They just want the fire reflected on the water (water that is still a 10-polygon mesh, to spare the consoles' insufficient geometric power), shiny glass where they can see their character mirrored; that's the flashy side of the effects.
    They do not care that one could create an atmosphere like those in movies, because too much realism does not allow them to see things if global light is not emulated by artificially raising the brightness level of the entire scene.
    They would turn off the effect to see the enemy hidden in the shadows, and turn off real calculated transparency just to have the texture removed so they can see the enemy behind the curtain.

    Raytracing is as advanced as you want it to be. It just needs 2 things: processing power and the willingness of gamers to accept it as a way to describe more realistic scenes, even though that can make the game more difficult (or slower).
    Here we have the most basic of the two factors: shiny (and, let's say it, boring) reflections, which are the things that any gamer of any age just recognizes as fancy.
  • djayjp - Saturday, March 21, 2020 - link

    Looks like poo
  • D. Lister - Saturday, March 21, 2020 - link

    This "demo" was a recording, so technically it is more a proof of concept than an actual demonstration. It will be a demo when it is running on AMD hardware in real time. Considering this is from AMD, the kings of marketing hype and unfulfilled promises, I wouldn't be terribly surprised if they actually used a bunch of Quadros to make this video. :p
  • Qasar - Sunday, March 22, 2020 - link

    " the kings of marketing hype and unfulfilled promises " i thought that was intel :-)
  • CiccioB - Monday, March 23, 2020 - link

    Oh, well, let's see... Fiji, Polaris, Vega... and Navi, which just hides behind the use of a new, more advanced node, but in comparison with what is already on the market it is still well behind and already obsolete.
  • Spunjji - Monday, March 23, 2020 - link

    Fiji was *slightly* over-hyped. Polaris was exactly what they said it was. Vega was massively over-hyped, and the dude who did that now works for... *checks notes* Oh, Intel.

    For an "already obsolete" chip, Navi sure does sell just fine.
  • Qasar - Monday, March 23, 2020 - link

    OK there, cicciob, most of your posts seem to reek of anti-AMD bias, so what you say is partly meaningless... so whatever...
  • CiccioB - Tuesday, March 24, 2020 - link

    Yes, showing facts is having anti-AMD bias, while going on praising failure architectures that do not bring any advantage to the market's evolution in terms of technology or price is a good thing that lets you constantly hope that the next one will be the right GPU!
  • Qasar - Tuesday, March 24, 2020 - link

    WHAT facts? All you have done is post your biased BS and opinions, with NO proof, links, or sources for your BS claims.

    " while going on praising failure architectures that do not bring any advantage to the market's evolution in terms of technology or price is a good thing " News flash for you: Intel has been doing this for years now, the same refresh on the same architecture they have been using for how many years now? Sticking the mainstream with quad cores, and pretty much lying about their power usage. I don't think I have ever seen you harp on and criticize Intel like you have been doing with AMD.
  • CiccioB - Tuesday, March 24, 2020 - link

    Fiji was over-hyped, seeing as it was a desperate attempt to finally get something with more performance than the competition despite its cost: HBM, an ultra-big die despite the lack of FP64 (which was news for AMD), but it came with just 4GB (ah, costs sometimes matter when they are excessive).
    Polaris was really not what it was said to be: I didn't see the 50% power efficiency improvement, and it was so computationally inefficient that to match a puny 1060 they had to overclock it past the 150W they had planned, to the point of being outside the specs of their own already-built boards. And to have a small win over the competition's puny GPUs they went further, overclocking it even more and making it draw more than a 1080. So, no, Polaris was a fail that was only good because it was discounted.
    Vega was the biggest failure of the GCN period, as it could not match the competition's flagship despite its high costs, and it was also completely useless as a compute board. To raise its value AMD had to give away for free the drivers for HW acceleration of professional applications, and cut its price even before launching it, in an extreme attempt to cut into Nvidia's professional market somewhat.
    They all just ended up in discount mode for months, in the hope of emptying the stores whatever the loss.
    Let's not talk about Vega VII: 300W of technology without providing anything useful in the market it was targeted at (that's AI acceleration).

    Navi sells because it is simply discounted with respect to the competition's offerings. It's always the same mantra: there are no bad products, only wrong prices.
    Tell me, honestly, what Navi offers over a 1080 Ti that is more than 3 years older.
    We are already in the market of RT, VRS, mesh shading, and AI acceleration. Which of all this is Navi offering?
    It is just an old piece of technology that is discounted to get a grip 3 years later.
  • Qasar - Tuesday, March 24, 2020 - link

    Blah blah blah, more BS and opinions only, and no proof.
    " It is just an old piece of technology that is discounted to get a grip 3 years later " Kind of like Intel currently.
  • nt300 - Thursday, April 9, 2020 - link

    You need to understand that AMD's Bulldozer release in 2011 set AMD back many years. AMD put most of its resources into developing Zen, neglecting the GPU division and cutting its R&D by a significant amount. This has all been mentioned in AMD's earnings reports for years now.

    Now that Zen, Zen+ & Zen 2 have been remarkably successful, Dr. Lisa Su herself, by her own admission, moved Zen engineers in mid-2019 to help the Radeon Technologies Group get RDNA2 ready for launch in 2020.

    It's remarkable what AMD has done with RDNA1 with a slight change in focus. Not only did the 5700 XT force Nvidia to release the Super series 5-6 months after releasing RTX, it also forced Nvidia to concede on price jacking, or to put it in simple terms, ripping people off with overpriced GPUs that are not worth the price premium.

    So ReLaX, AMD is battling on 3 fronts: CPUs, GPUs & enterprise. RDNA2 is coming and it's going to impress, and yes, Nvidia is worried, only because they know this new AMD architecture is going to offer very stiff competition. The days of Nvidia ripping people off are going to end.
  • nt300 - Thursday, April 9, 2020 - link

    Navi, aka RDNA 1, achieved what it was meant to: dethrone the RTX 2070, which it did. Why else do you think Nvidia released the Super series lineup? It upset a lot of Nvidia RTX owners when Nvidia pulled that Super series stunt on them.
