80 Comments

  • isthisavailable - Monday, May 31, 2021 - link

    Love that they used a GTX 1060 in their presentation. A nice slap in the face to Nvidia.
  • at_clucks - Tuesday, June 1, 2021 - link

    Comparing to a 2+ year old Nvidia card certainly doesn't do AMD any favors. When you want to show off your latest and greatest pick the best the competitor has.

    But this tech will go great with all those 1600+Euro 6800XTs. o_O
  • check0790 - Tuesday, June 1, 2021 - link

    It actually does AMD favours. The 1060 wasn't just a random pick; it's currently the most-used card in the Steam hardware survey. It's also a card that does not possess any tensor cores or dedicated hardware to process any of this, whereas Nvidia's DLSS needs dedicated hardware to run. So by showing that devs can use this regardless of target hardware, and given that current consoles also use AMD hardware, anyone developing multi-platform titles is more likely to support this instead of DLSS.
  • at_clucks - Tuesday, June 1, 2021 - link

    Sure, but from a branding perspective it suggests they can only fight in *that* range. Perhaps that's what they're interested in, targeting that segment. But they still position themselves as a decidedly low-end offering.

    In contrast, for their CPU launches or announcements they rarely position themselves against old, lower-end Intel CPUs, no matter how popular those may have been.
  • AREryilmaz - Tuesday, June 1, 2021 - link

    Mate, consumers are far more likely to use such a feature on a lower-end card than on an already mighty 6800 XT. The 1060 is the perfect choice here. The scaling looks like it might breathe new life into a still-alive-and-well 1080 Ti, for example.
  • at_clucks - Wednesday, June 2, 2021 - link

    Mate, consumers don't buy "the game with FidelityFX Super Resolution". They buy what they're given. Showing that the 5 year old GTX 1060 has the best improvement makes it look like AMD is launching a stopgap solution for GPUs that will get upgraded pretty soon anyway. Maybe the GPU availability black hole keeps going for another couple of years.

    And to be honest, maybe that's exactly what they're doing. If it's indeed a DLSS 1.0 equivalent (so if that can be used as a reference for performance compared to an eventual follow-up), then it is a stopgap solution, a 1.0, the thing that might keep old mid-range cards from the competitor chugging along through the GPU crisis. No favors there.

    What did them some favors is that the tech works with such a wide range of GPUs, including Nvidia's.
  • at_clucks - Wednesday, June 2, 2021 - link

    P.S. The 1080Ti that you mentioned would have been a much better reference. It's still a more than decent card that one could conceivably hang on to for a while longer. Comparing to the 1060 feels like showing improvements on a 2nd gen. i5. Something that's on the edge of being upgraded for most users.
  • Murloc - Tuesday, June 1, 2021 - link

    This is an advertisement towards the game developers rather than people who spend lots of money on GPUs. You don't need this technology with a top end GPU.
  • euskalzabe - Saturday, June 5, 2021 - link

    You’d think that’s quite obvious, but the commenter you’re responding to doesn’t seem to get it…
  • mode_13h - Sunday, June 6, 2021 - link

    > You don't need this technology with a top end GPU.

    Who wouldn't want better framerates, even *with* a top-end GPU?
  • euskalzabe - Saturday, June 5, 2021 - link

    They were not comparing the performance of AMD cards vs Nvidia cards. They were simply showing that FSR can also work and be helpful on GeForce cards. This was pretty obvious in the presentation. #facepalm
  • pipyakas - Tuesday, June 1, 2021 - link

    Anyone developing multi-platform titles on Unreal Engine 4 is more likely to use its own Temporal Anti-Aliasing Upsampling (TAAU), or Temporal Super Resolution (TSR) on UE5. Both support a wider range of hardware, and are more likely to deliver better performance/image quality since they are not spatial-only upscalers.
  • yannigr2 - Tuesday, June 1, 2021 - link

    You are totally missing the point. AMD is offering many Nvidia customers (everyone with a GTX 10-series card) something that Nvidia could have offered, but decided not to.
  • shabby - Tuesday, June 1, 2021 - link

    I'm sure they'll change their mind now and offer it.
  • blppt - Wednesday, June 2, 2021 - link

    Well, they can't offer 2.0 on the 10 series because it has no tensor cores. So, it probably wouldn't be worth the effort unless they can get better quality results than they had before without those cores.
  • Railander - Tuesday, June 1, 2021 - link

    they didn't compare anything to the 1060, they compared the 1060 with itself.
    it's a slap in nvidia's face because DLSS only works on RTX cards.
  • webdoctors - Friday, June 4, 2021 - link

    Had to search for it but man.....1060 is almost 5 years old! Time flies....

    "Nvidia introduced its mid-tier GTX 1060 cards with the launch of the 6GB GTX 1060 on July 19 2016"
  • nandnandnand - Monday, May 31, 2021 - link

    "Unfortunately the image quality hit is quite noticeable here. The building and bridge are blurrier here than the native resolution example, and the tree in the background – which is composed of many fine details – easily gives up the fact that it’s running at a lower resolution."

    I'm used to garbage quality, so I'm excited for performance mode.

    That GTX 1060 demo is using the "Quality" (second-best) mode for a 41% FPS gain, compared to Godfall on a 6800 XT, where FPS doubles in that same mode. If we believe the numbers. So "Ultra Quality" could look better than that shot, but with a smaller FPS gain.
  • Someguyperson - Tuesday, June 1, 2021 - link

    If you're used to "garbage quality", why not just run all your games at a lower resolution? That's all FSR is doing here. This is a complete and utter waste of time and effort. If you don't have access to tensor cores to upscale, Unreal's built-in temporal upscaling completely blows this garbage out of the water and it'll run on every platform natively.
  • pipyakas - Tuesday, June 1, 2021 - link

    If you're playing at 1440p on the Epic preset like AMD was, at least. You could lower the resolution to 1080p and get roughly 30% more performance in the process, depending on the game.
    If FSR is not better than simply lowering the resolution, while requiring a patch to support existing titles, there's no point to it at all.
  • tamalero - Friday, June 4, 2021 - link

    Some screens look like shit when you use lower resolutions and they attempt to auto-fill the whole screen.

    Other players don't like playing in a tiny box instead of their full screen.
  • flyingpants265 - Friday, June 11, 2021 - link

    I thought Fidelity FX used GPU scaling
  • flyingpants265 - Friday, June 11, 2021 - link

    I mean for the adaptive resolution and downscaled resolution and such
  • euskalzabe - Saturday, June 5, 2021 - link

    There is a point: not all games support resolution scaling, or have issues when set to sub-1080p resolutions (blurrier menus and hud, etc). If FSR allows me to essentially do the equivalent of resolution scaling on games that don’t have that feature, that’s a pretty big win.
  • mode_13h - Sunday, June 6, 2021 - link

    > If FSR allows me to essentially do the equivalent of resolution scaling on
    > games that don’t have that feature, that’s a pretty big win.

    Enabling FSR requires some effort from developers. You've got to ask yourself how many developers who can't even be bothered to implement usable resolution scaling are going to put in that extra effort.
  • JasonMZW20 - Tuesday, June 1, 2021 - link

    I think many people will accept some image quality loss when performance increases dramatically (esp. from unplayable fps to playable).

    I tried Nvidia's DLSS on Quality and Balanced in Cyberpunk, and I know it's not the best implementation, but the quality loss is noticeable even at 1080p native (Asus Zephyrus G14 gaming laptop). Turned off RT and DLSS, in the end, as I tend to like higher image quality.

    Maybe if AMD combines CAS with FSR, it might mitigate some of the blurriness. From what I saw with DLSS, that's not perfect either. It's the missing information conundrum. It's always going to be difficult to upscale from lower resolutions unless you can reconstruct at pixel levels (or at least neighboring pixels), which is not going to be real-time. Makes for great AI-enhanced stills or videos, though.

    Side note: Control's initial DLSS 1.9 implementation was entirely shader-based. It's fully DLSS 2.0 now, but it didn't look too bad as a shader-only upscaling algorithm. Maybe there's hope for FSR.
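    The kind of sharpening pass discussed above can be illustrated with a toy unsharp mask; this is a hedged Python/NumPy sketch of the general idea only, not AMD's actual CAS algorithm (real CAS adapts its strength per pixel from local contrast):

```python
import numpy as np

def sharpen(img, amount=0.5):
    # Toy unsharp mask: push each pixel away from its 4-neighbour average.
    # (Real contrast-adaptive sharpening varies `amount` per pixel; this
    # only shows the general shape of a sharpening pass.)
    padded = np.pad(img, 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return np.clip(img + amount * (img - neighbours), 0.0, 1.0)

# A soft ramp gains local contrast (gets "crisper") after the pass.
row = np.tile(np.array([0.2, 0.4, 0.6, 0.8]), (4, 1))
out = sharpen(row)
```

    The dark end of the ramp gets darker and the bright end brighter, which is why a pass like this can visually offset some upscaling blur.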
  • jmke - Tuesday, June 1, 2021 - link

    I don't agree with you on that one. In action in Cyberpunk, DLSS Balanced mode with RT set to Ultra makes the game look so much better compared to RT off/without DLSS and lower graphical detail in general.

    The extra boost in game-world richness DLSS offers, by letting you enable all the bells and whistles in the graphics settings, is a very worthy trade-off for slightly less "crisp" graphics. Sometimes the upscaling is even better than native with AA at removing jagged edges.

    RT is nice and all
    but DLSS is really where it is at, for the future of games and gaming in general, be it pancake or VR
  • bcronce - Wednesday, June 2, 2021 - link

    What a lot of people don't realize is that DLSS is meant to upscale 4K to 8K; it just happens to work decently at upscaling 1440p to 4K, is not very good at 1080p to 1440p, and is horrible below that.
  • Dribble - Tuesday, June 1, 2021 - link

    It's already combined with CAS, as I understand it; FSR = a new version of checkerboard rendering + CAS, i.e. it is just another attempt at a traditional upscaler. You might as well use the one built into UE5; it'll work better and is fully integrated.

    It's no competitor to DLSS and that can't exist until AMD get their own AI cores.
  • whatthe123 - Tuesday, June 1, 2021 - link

    Their FSR website says it's spatial upscaling, so it would be worse than current checkerboard methods if that were the case. From the (albeit highly compressed) results it just looks like a naive upscaler; we'll have to wait and see uncompressed screenshots.
  • euskalzabe - Saturday, June 5, 2021 - link

    And then what do you do with games that don't run on UE5? Having a generic option like FSR is helpful.
  • bcronce - Wednesday, June 2, 2021 - link

    "It's the missing information conundrum. It's always going to be difficult to upscale from lower resolutions"

    The human optical system only has a few megabits of bandwidth; think YouTube at 1080p, 24 Hz. Yet we know human vision resolves vastly more detail. We can detect temporal anomalies in single-digit milliseconds, notice the lack of smoothness below 300 fps, and resolve detail into the 16-megapixel-per-eye range.

    Similar to audio compression that is tuned for human hearing and not random noise, there is no reason we cannot fill in missing data with something "reasonable".

    In a simplistic nutshell, our optical system is designed to find the "important" data from input, and fill in the rest with educated guesses. One description is that we "hallucinate reality". We really have very little input data from our senses. Most of our experience of detail comes down to the brain filling in the blanks.
  • brunis.dk - Friday, June 4, 2021 - link

    kinda wish i could upvote this!
  • euskalzabe - Saturday, June 5, 2021 - link

    Well said!
  • icoreaudience - Tuesday, June 1, 2021 - link

    Seriously? This picture quality is barely better than a low-resolution image simply upscaled and smoothed. If that's all there is, it's terribly disappointing. Just play the game at a lower resolution.
  • Cooe - Tuesday, June 1, 2021 - link

    Watch the actual demos; they're on YouTube. The ones running on AMD cards and upscaling to 4K all looked freaking GREAT!... But OTOH the Nvidia GTX 1060 upscaling-to-1440p demo looked like absolute trash. It might only "work" on Nvidia hardware, but kinda not really.
  • haukionkannel - Wednesday, June 2, 2021 - link

    Most likely it needs Nvidia driver optimizations. Things that AMD already has, Intel soon will have, and Nvidia will try to delay as long as possible…
  • flyingpants265 - Friday, June 11, 2021 - link

    Randomly watched a Carmack talk from the 2012-2014 era, and he said 4K screens are coming because the companies need something to sell, but they're not that great for gaming. Is 1440p even common yet? I haven't looked around.

    I won't say 4k gaming is useless because I've never tried it. People put down 144hz too, and there's no way I could go back now.

    I would like to see 4k@144hz and 8k@240hz compared side by side. I have a feeling you'd actually be able to see a pretty huge difference.
  • yetanotherhuman - Tuesday, June 1, 2021 - link

    I fear you are right. Without a temporal element, this is likely to be FidelityFX CAS v2, not a true DLSS 2 competitor.
  • SaturnusDK - Tuesday, June 1, 2021 - link

    DLSS is dead! Time to move on Nvidia.

    FSR will take over just like FreeSync won out over G-Sync. Making an open standard that can be used on all levels of hardware, and not just halo products, means the end for a proprietary implementation. It always has, and it always will.

    Now it's up to Nvidia to rebrand FSR as something like DLSS lite or whatever their marketing comes up with just like they did when they finally gave up on G-sync, and just rebranded Freesync as "G-sync compatible".
  • Dizoja86 - Tuesday, June 1, 2021 - link

    You forgot the /sarcasm at the end of your post. DLSS is just getting started, and I think most people predicted that a cruder approach to image upscaling was going to result in equally crude results. DLSS isn't going anywhere, and the self-management of overdrive settings in dedicated G-sync modules is also not trivial (although neither is the additional cost).

    I'm hoping that AMD can improve the quality of FSR as it moves forward, and that it can then implement the software on consoles as well where it could be immensely beneficial, but I wouldn't be surprised if it's just a stopgap until AMD can implement a true hardware-based DLSS competitor.
  • Daeros - Thursday, June 3, 2021 - link

    Don't forget that AMD has a huge advantage here: all games developed for the current generation of consoles will be running on AMD GPUs. If you're a major game design house, do you want your team to build the game for the consoles, then again for the PC, just to get access to DLSS? Or would you rather use RIS or FidelityFX from the beginning?
  • Dizoja86 - Friday, June 4, 2021 - link

    Developers wouldn't need to "build the game again" just to implement DLSS. It'll likely be far easier to implement DLSS as well as FSR than to spend time doing low level tweaking for Nvidia cards. PC gamers would likely have the option of using either FSR on/off or DLSS on/off depending on their hardware and image quality requirements.
  • mode_13h - Saturday, June 5, 2021 - link

    I can't speak to XBox, but Playstation has its own graphics API. Porting between Playstation and anything else is either done using a portable engine or else you have to rewrite the graphics code. So, there wouldn't seem to be much to gain by Sony using AMD GPUs.

    Also, most console ports are done outsourced to porting houses that specialize in that sort of thing.
  • PaulSimon - Tuesday, June 1, 2021 - link

    This can't compete with DLSS at all in its current form. Maybe you were joking?
  • SaturnusDK - Tuesday, June 1, 2021 - link

    You have to be seriously delusional if you consider DLSS to be anything but an utter failure. Sure, it works great on a few high cost GPUs and in a limited number of games. And that's why it's total garbage.

    FSR isn't a competitor to DLSS. It's its successor. FSR is what consumers and the industry want: an open standard that works on all modern GPUs including consoles, and in principle in all major game engines, and thereby in almost all games, from day one. AMD proved this point by demonstrating it working on a 1060. Sure, it might not be great looking in the demo, but that doesn't matter at all. They probably put minimal effort into optimising it for their competitor's products right now, and are almost certainly concentrating on getting the best possible performance on their own products first.

    What the 1060 demo did show was that FSR works on non-DLSS capable hardware that it hasn't even been optimised for, and has the potential to move a game from being unplayable to playable on older and/or less capable hardware. That's what matters to the vast majority of consumers.

    Only a tiny minority of consumers gives a rat's arse about DLSS getting you slightly better ray-traced performance, when the vast majority of them aren't buying products that even have the ability to use the technology. They care about improving the performance of their potato workhorse. And that's what FSR offers.
  • pipyakas - Wednesday, June 2, 2021 - link

    You can improve the performance of any potato workhorse by lowering the resolution, which is what AMD is doing here. And if that's the "successor" to an AI-driven, temporal-aware upscaler, I'm happy that technology is going backwards 40 years.
  • tamalero - Friday, June 4, 2021 - link

    That's what DLSS does, dude. It lowers resolution then uses AI to fill the stuff with sharpeners and other shit.
  • mode_13h - Sunday, June 6, 2021 - link

    > uses AI to fill the stuff with sharpeners and other shit.

    " and other shit." LOL

    A big part of it is using TAA-like techniques to combine samples across multiple frames.
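    The cross-frame accumulation mentioned above can be sketched as a simple exponential blend; this is an illustrative NumPy toy of the core idea, not Nvidia's actual pipeline (real TAA/DLSS also reprojects history with motion vectors and clamps it against the current frame's neighbourhood):

```python
import numpy as np

def temporal_accumulate(history, current, alpha=0.1):
    # Exponential blend of the current frame into a history buffer: the
    # core of TAA-style reconstruction. Each frame contributes a little,
    # so per-frame noise (or jittered sub-pixel samples) averages out.
    return (1.0 - alpha) * history + alpha * current

# Accumulating noisy samples of a static scene converges toward the clean
# signal, which is where the "extra" information comes from over time.
rng = np.random.default_rng(0)
truth = np.full((8, 8), 0.5)
history = truth + rng.normal(0.0, 0.2, truth.shape)
for _ in range(200):
    history = temporal_accumulate(history, truth + rng.normal(0.0, 0.2, truth.shape))
residual = float(np.abs(history - truth).mean())  # far below the per-frame noise
```

    A purely spatial upscaler has no history buffer to draw on, which is the crux of the FSR-vs-DLSS comparison in this thread.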
  • torginus - Tuesday, June 1, 2021 - link

    Yeah, comparing new tech to an old GPU rather than Nvidia's competing product doesn't inspire much confidence.
    Also, the fact that it works on a per-frame basis fundamentally means it cannot work properly. Aliasing is synonymous with undersampling in the world of signal processing, which can probably only be remedied by having more data to work with (either from previous frames, or from rendering at a higher resolution).
    Kinda bummed about this, as I had high hopes that a credible open DLSS competitor would arrive.
  • tamalero - Friday, June 4, 2021 - link

    I do not think you understand what they were trying to achieve. Many people have already explained it here.
    The 1060 is the most popular GPU on Steam, the most-used card right now.
    Also, it's not capable of running DLSS, because DLSS is only for RTX cards with tensor cores.

    AMD is offering the widest possible support for their product: not only covering their own cards (unlike Nvidia), but also older cards (which Nvidia also does not do), AND even the competitor's product (something Nvidia never did until the failure of G-Sync).
  • mode_13h - Sunday, June 6, 2021 - link

    > I do not think you understand what they were trying to achieve.

    I don't think you get @torginus' point, which is that FSR is disappointing, simply on its merits (or lack, thereof). That's a fair and legit point, all on its own. The thing about the legacy GPU is really a side-track, as I'll explain.

    Basically, AMD had few options, due to the lack of deep learning horsepower in their GPUs and probably how heavily Nvidia patented the space. So, AMD pulled something out of their ass that's maybe not entirely unusable, but it's not competitive. Then, they had to face the challenge of how to spin it so it would look like more of a win and give developers a reason to waste any time on it. And *that's* where they probably got the idea of supporting it on legacy Nvidia hardware.
  • euskalzabe - Saturday, June 5, 2021 - link

    You are here… and the point of that demo is so far away you don’t even see it.
  • edzieba - Tuesday, June 1, 2021 - link

    With the obvious resampling going on in that image... is 'AMD FSR' just literally rendering at a lower resolution and classically up-sampling, with a fancy brand name slapped on to pretend they've actually done anything?
  • PaulSimon - Tuesday, June 1, 2021 - link

    I mean yes, it's literally upsampling.
  • Yosar - Tuesday, June 1, 2021 - link

    And DLSS is not upsampling?
    Was the image just magically created, not upsampled, by Jensen himself?
  • edzieba - Wednesday, June 2, 2021 - link

    It 'magically' creates HF signal from LF signal through a pre-trained neural network. Without that neural network, you're left with classical scaling techniques, i.e. what we've been using for decades. There's no magic to, say, a Lanczos kernel that applies the same 2D transform across the frame.
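    To make that contrast concrete, a classical resampler really is one fixed, content-independent kernel applied everywhere; a minimal 1-D NumPy sketch of Lanczos resampling, purely for illustration:

```python
import numpy as np

def lanczos_kernel(x, a=3):
    # Classical Lanczos-3 window: sinc(x) * sinc(x/a) inside |x| < a, else 0.
    # The same fixed weights are used everywhere, regardless of image content,
    # unlike a trained network that can synthesize plausible high frequencies.
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def upscale_1d(samples, factor, a=3):
    # Resample a 1-D signal by `factor` with the Lanczos kernel.
    positions = np.arange(len(samples) * factor) / factor  # output coords in input space
    idx = np.arange(len(samples))
    w = lanczos_kernel(positions[:, None] - idx[None, :], a)
    w /= w.sum(axis=1, keepdims=True)  # renormalise truncated edge windows
    return w @ samples

signal = np.array([0.0, 0.0, 1.0, 1.0])
up = upscale_1d(signal, 2)  # interpolates; original samples pass through unchanged
```

    Note the kernel passes existing samples through untouched and only interpolates between them; it can never add detail that wasn't in the input.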
  • mode_13h - Friday, June 4, 2021 - link

    Don't forget that DLSS 2.0 also has temporal information. When there's a little camera motion, super-resolution techniques are very effective at using it to produce a higher-res, less noisy output.
  • del42sa - Tuesday, June 1, 2021 - link

    AMD DLSS 1.0 ...
  • mode_13h - Tuesday, June 1, 2021 - link

    The problem is that AMD doesn't have the same level of computational horsepower as what Nvidia is using for DLSS. So, the potential for 2.0 being similarly improved is much less.

    Also, Nvidia certainly patented the heck out of DLSS, which is going to block AMD from coming anywhere close to replicating it, even if they did have similar compute resources.
  • Cooe - Tuesday, June 1, 2021 - link

    The FSR demos running on AMD cards upscaling to 4K looked way, WAAAAAAAY better than the Nvidia GTX 1060 demo. Methinks it either performs worse on Nvidia hardware beyond the sheer raw performance difference, OR it doesn't play nice with lower resolutions. My guess is it's mostly the former with a bit of the latter.
  • lightningz71 - Tuesday, June 1, 2021 - link

    I'm more interested in what this can do for APUs. APUs are heavily hamstrung by memory performance and the limited silicon area for GPU logic. If this can work, and work well, on the U-series APUs, letting them render in full detail at 720p, where they seem to do just fine, and then properly upsample to 1080p with playable frame rates and solid quality, it's a big win for mobile.
  • brucethemoose - Tuesday, June 1, 2021 - link

    This same problem exists in image/video upscaling land.

    Take a network built for images, run it on every frame of a video, and it looks great... until you watch it in motion and see the flickering. This isn't necessarily a problem with "traditional" upscalers and processors, as they're more deterministic.

    Also, Pascal was the first to support FP16, IIRC. Maybe that's the minimum required for support? And I think AMD FP16 support goes pretty far back.
  • mode_13h - Tuesday, June 1, 2021 - link

    > Pascal was the first to support FP16

    Only on the P100. However, what the gaming GPUs had was 4x int8 dot product. I think Turing has both, but then it has tensor cores which are a huge leap beyond simple "packed-math" (that's what AMD calls it).

    AMD added packed fp16 in Vega, and then packed int8 in Radeon VII (but with a very limited set of operations). RDNA really fleshed out their contingent of packed-math operations. Still, it's nothing like Nvidia's Tensor cores.

    AMD has apparently challenged Tensor cores with their own "Matrix Cores", in their CDNA-based MI100 (Arcturus), but that has no graphics units. So, it'll be interesting to see if they bring Matrix Cores to RDNA3.
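    The 4x int8 dot product mentioned above (Nvidia's "dp4a" instruction) can be modeled numerically like this; an illustrative Python sketch of the arithmetic only, not of the hardware:

```python
import numpy as np

def dp4a(a4, b4, acc):
    # Numerical model of the 4-way int8 dot product ("dp4a"): four int8
    # pairs multiplied and summed into a 32-bit accumulator in one step.
    # Packed-math hardware does all four multiplies in a single lane.
    return acc + int(np.dot(a4.astype(np.int32), b4.astype(np.int32)))

a = np.array([1, -2, 3, 4], dtype=np.int8)
b = np.array([5, 6, -7, 8], dtype=np.int8)
res = dp4a(a, b, 0)  # 1*5 + (-2)*6 + 3*(-7) + 4*8 = 4
```

    The widening to 32 bits matters: it lets the low-precision products accumulate without overflow, which is the same trick tensor cores use at tile scale.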
  • evolucion8 - Wednesday, June 2, 2021 - link

    Not really; AMD was first with GCN 1.2, aka Tonga, aka the Radeon R9 285 and 380X. It supported FP16 back in 2014.
  • mode_13h - Saturday, June 5, 2021 - link

    > Not really, AMD was first with GCN 1.2 aka Tonga aka Radeon HD 285 and 380X

    Only as a sort of technicality, but not in any way relevant to the discussion at hand. In GCN 1.2, 16-bit operations were added in what was described as a power-saving move, occupying only half of a shader/SIMD lane.

    https://www.anandtech.com/show/8460/amd-radeon-r9-...

    It wasn't until Vega that AMD decided to pack two fp16 values in a single lane, which they termed "Rapid Packed Math".

    https://www.anandtech.com/show/11717/the-amd-radeo...

    Intel was actually first to do this, in their Gen8 HD graphics GPUs. Those shipped in Broadwell CPUs, back in 2015.
  • mode_13h - Saturday, June 5, 2021 - link

    If you really want to talk about the inclusion of fp16 in GPUs, for its own sake, it happened long before. This is dated 2002:

    https://www.khronos.org/registry/OpenGL/extensions...

    And I assume its use in GPUs is what motivated its inclusion in IEEE 754-2008.
  • Ozzypuff - Tuesday, June 1, 2021 - link

    Ok sounds awesome but will my athlon 3000G run Genshin Impact better?
  • mode_13h - Tuesday, June 1, 2021 - link

    Wow, it appears the game they're using already has some sort of TAA. Zoom in on the lamp post, in the "off" side, and look at the high-frequency aliasing artifacts around it.

    I guess it could also be a compression artifact, depending on how this frame was captured.
  • maroon1 - Wednesday, June 2, 2021 - link

    "this slide does not appear to have FSR applied; it’s just a fancy background for the performance data"

    Haha. FSR is garbage
  • blppt - Wednesday, June 2, 2021 - link

    One wonders why AMD didn't develop their own tensor-core equivalent for Big Navi. The 6900 XT is a beast (if you can find one), but it's absolutely crippled by not having the 3090's DLSS 2.0 advantage in graphics-intensive AAA/RT games running at 4K.
  • evolucion8 - Wednesday, June 2, 2021 - link

    Not really. Those tensor cores just do the image inferencing to match the reference image produced with the parameters from their server farms. AMD can do that in shaders IF they have spare bubbles in the execution resources, which is doubtful considering how efficient AMD is this time with execution-resource utilisation. The issue with AMD's RT performance is not really the ray casting itself but the shading and denoising, which is done by the CUs and TMUs instead of a fixed-function engine. RT has roughly four steps: AMD's implementation accelerates the first two but not the last two, while Nvidia's approach is fully hardware-accelerated from top to bottom but lacks flexibility. In the end, depending on the workload, if AMD's implementation is done correctly they can be close in performance while saving a lot of die-size cost. One good example is Metro Exodus Enhanced, one of the most demanding implementations of ray tracing, where Nvidia is ahead by 20% on average; in lamer implementations like Shadow of the Tomb Raider the gap is much wider.
  • mode_13h - Saturday, June 5, 2021 - link

    > those Tensor cores they just do the image inferencing to match the reference image
    > done with the parameters in their server farms

    LOL. Sounds like someone explained deep learning to you once, and you're just trying to regurgitate what you remember them saying.

    Tensor cores are generic matrix-multiply pipelines, and can be used for anything, although the data types they support do limit their applicability.

    > AMD can do that in shaders

    You don't get it. What makes tensor cores so potent is the optimization in data-movement (as well as the instruction-density). The tensor cores are just different pipelines, fed by the same shader cores. So, the fact that they achieve such astonishingly higher performance than regular shader arithmetic should tell you something.

    > instead of using a fixed function engine.

    They're not fixed-function, any more than other shader ALU operations.

    > Nvidia's approach is fully hardware accelerated from top to bottom but lacks flexibility

    Where does it lack flexibility, and why are you suddenly talking about ray tracing performance in a thread about FSR?
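    The point that tensor cores are generic matrix-multiply pipelines can be shown numerically: the operation is a fused D = A @ B + C on a small tile, with low-precision inputs and higher-precision accumulation. A NumPy model of that arithmetic contract (it captures none of the hardware's data-movement or speed advantage):

```python
import numpy as np

def mma(a, b, c):
    # D = A @ B + C on a small tile: fp16 inputs, fp32 accumulation.
    # This is the numerical contract of a tensor-core MMA instruction;
    # the hardware's win is executing a whole tile per instruction with
    # far better data movement, which this toy model doesn't capture.
    return a.astype(np.float32) @ b.astype(np.float32) + c

a = np.ones((4, 4), dtype=np.float16)
b = np.ones((4, 4), dtype=np.float16)
c = np.zeros((4, 4), dtype=np.float32)
d = mma(a, b, c)  # each element sums four 1*1 products
```

    Nothing about the operation is specific to inferencing; any workload that can be phrased as tiled matrix multiplies can use it.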
  • mode_13h - Saturday, June 5, 2021 - link

    If you really want to understand Tensor cores, read this:

    https://www.anandtech.com/show/12673/titan-v-deep-...
  • mode_13h - Saturday, June 5, 2021 - link

    MI100 (Arcturus) has "Matrix cores". Too bad Anandtech skipped reviewing that "GPU", although it's even less of a graphics processor than typical headless server GPUs, as it has no raster or texture units.

    https://www.amd.com/en/products/server-accelerator...

    We can only hope they'll see fit to include some of this stuff in RDNA3.
  • Sekiberius - Thursday, June 10, 2021 - link

    God, the 2nd image is blurry as hell; they're probably trying to hide the fact that it looks like it's rendered at 720p.
  • flyingpants265 - Friday, June 11, 2021 - link

    Can someone explain why this matters at all? Fancy upscaling allows you to run at higher resolution with higher performance, and very little drop in image quality. So it's basically equivalent to a performance increase.

    Only it's not free, because once it becomes an official standard, the cards will be designed with this in mind. Soon enough they'll just start pricing the cards 20% higher.. that or lower their performance targets by 20%, because they can get that extra 20% just by using the upscaling. So ultimately it's a wash, it's the same performance in the end, except you end up with an upscaled image.

    Feel free to correct me.
  • flyingpants265 - Friday, June 11, 2021 - link

    Same price vs. performance, also.
