101 Comments
shabby - Wednesday, January 19, 2022 - link
Rip gpu reviews @ AnandTech 1997-2019, you'll be missed!
velanapontinha - Wednesday, January 19, 2022 - link
amen
Kurosaki - Wednesday, January 19, 2022 - link
Anyone know what happened? This was my go-to site for GPU data. Makes me sad.
Unashamed_unoriginal_username_x86 - Wednesday, January 19, 2022 - link
From what I recall hearing from Ryan, they stopped getting sampled. He was also affected by the California wildfires.
Kurosaki - Wednesday, January 19, 2022 - link
But, stopped getting sampled? They were the best tech site in the world. How could that be?
WJMazepas - Wednesday, January 19, 2022 - link
AMD and Nvidia are probably focusing on YouTube reviewers.
Kurosaki - Wednesday, January 19, 2022 - link
But still, a site of this caliber. I. Just. Don't get it. Their database is still my go-to for comparing new cards with older ones. Sad. I hope AnandTech gets it together, else I'm afraid it will just disperse in time.
Samus - Friday, January 21, 2022 - link
Seriously, fuck nVidia and AMD if that is true. AMD and Intel will give them engineering samples of CPUs and a confidential launch date, but they won't supply them RETAIL products for review, preferring YouTubers? I don't want to see some dude play games comparing cards with the FPS in the corner. I want charts.
at_clucks - Friday, January 21, 2022 - link
"Youtubers" also covers Gamers Nexus: https://www.youtube.com/watch?v=ZFpuJqx9QmwBut there are plenty of reviewers who found a way to review this card:
https://www.tomshardware.com/reviews/amd-radeon-rx...
https://www.pcgamer.com/amd-radeon-rx-6500-xt-revi...
https://www.techspot.com/review/2398-amd-radeon-65...
mode_13h - Wednesday, January 19, 2022 - link
Maybe if you stop reviewing the cards they *do* send, they catch on and stop sending you cards. Just a guess.
shabby - Wednesday, January 19, 2022 - link
Ryan mentioned the wildfires, but the "we're not being sampled" part is a new one. Why he won't update the site on the situation is a mystery.
shabby - Wednesday, January 19, 2022 - link
I also wonder who's going to do smartphone reviews.
Sunrise089 - Thursday, January 20, 2022 - link
Oh no, I didn't even realize he had left :( Huge shoes to fill with respect to the deep CPU analysis side.
Ryan Smith - Wednesday, January 19, 2022 - link
We're still getting sampled. I have the 6500 XT in house, for example. I've been running some tests on it.
However, it was not possible to get a review assembled. I do not expect to resume reviews until the Arc launch, which will also be when the 2022 benchmark suite and data sets are launched.
catavalon21 - Wednesday, January 19, 2022 - link
Respectfully, that's all fine and good, but AT has yet to review any RTX 3000-series card. Are we going to get a single card, the 6500 XT, and Intel's stuff, or will the entire missing generation of cards from both NV and AMD show up at AT?
Ryan Smith - Wednesday, January 19, 2022 - link
Those additional cards would show up on an as-needed basis when they're benchmarked on the 2022 suite.
Mr Perfect - Thursday, January 20, 2022 - link
Now that is exciting. Will you be doing architecture deep dives too? Nobody else really does those well.
Sunrise089 - Thursday, January 20, 2022 - link
Thank you Ryan, I really hope this ends up happening. I know there are other sites out there, but I've been using AT as my go-to for almost 20 years and I'd love to avoid branching out :)
Kurosaki - Friday, January 21, 2022 - link
Are you kidding me?! Are you telling us right here that we are back on track as of 2022?! That's great news, Ryan! Take care and keep up the good work! I feel I have to tell the world about this! You made me happy! :)
Again, thanks for reaching out!
nandnandnand - Wednesday, January 19, 2022 - link
Here's your review: the 6500 XT is one of the worst desktop GPU launches ever, but it doesn't matter; only the price matters.
Soulkeeper - Wednesday, January 19, 2022 - link
Agreed. I could understand making it slow, but they really took things to the extreme.
Every single aspect of this card is deliberately cut.
"And even the video encode/decode blocks have been scaled down, with AMD taking out the encode block and removing AV1 from the decode block."
heffeque - Wednesday, January 19, 2022 - link
What the heck? They even took out AV1 decode support? Wh... why?
vlad42 - Wednesday, January 19, 2022 - link
This GPU seems to have been designed to compete against a future Nvidia MX550. So, as a laptop-optimized GPU, they probably figured any media decoding would happen on the integrated graphics anyway, so why not save the space and increase yield?
This was probably also the reasoning for the PCIe 4.0 x4 link. OEMs would only ever pair this with a Rembrandt APU or Alder Lake CPU, so you might as well save on cost and power consumption.
Ryan Smith - Wednesday, January 19, 2022 - link
The short answer is cost optimization/keeping the die size down.
There's currently very little usage of AV1. According to AMD, they did their research and didn't see very many customers using that functionality, so it would have been a waste of die space for most users.
And of course, there will be a number of customers whose machines will be able to decode AV1 in another fashion (e.g. an APU). The bulk of sales for video cards are for new machines, especially entry-level video cards.
GreenReaper - Thursday, January 20, 2022 - link
Well, no, people aren't using AV1 now because hardware decode isn't available for most people - and encode, until very recently, has been mega slow. Chicken-and-egg problem. But, you know, that's probably OK. At the end of the day, it is a reduced chip. I imagine this also limits potential liability from any AV1 patent claims, for systems on which there would be little profit to start with.
For some reason, I was under the impression that decode blocks can only be used if the hardware controls the display, but perhaps this is no longer the case? Or maybe I'm thinking of QuickSync encode?
kepstin - Thursday, January 20, 2022 - link
The particular problem with Intel iGPUs is that on many platforms, installing a dGPU will cause the iGPU to be completely disabled.
There are a few boards where this isn't the case - even back in the Sandy/Ivy Bridge days you could see options for "Lucid Virtu", which let you leave the iGPU enabled and use its Quick Sync encoder even if it wasn't the primary GPU. (Newer boards sometimes still have similar features, without the Lucid branding.)
Spunjji - Friday, January 21, 2022 - link
The primary context here is notebooks, although every board I've used in the last 5 years or so has had the ability to run the iGPU at the same time as an add-in board - at most a BIOS change is required.
mode_13h - Saturday, January 22, 2022 - link
> The particular problem with Intel iGPUs is that on many platforms,
> installing a dGPU will cause the iGPU to be completely disabled.
No, I'm not sure about that. The board will be configured to boot off one or the other, but the rest of the GPUs (including the iGPU) should remain visible when software enumerates them.
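To make "software enumerates them" concrete, here is a minimal, Linux-only sketch assuming the sysfs DRM interface is present (on Windows you would check Device Manager or DXGI adapter enumeration instead). If the firmware hasn't disabled the iGPU, it shows up in this list alongside the dGPU.

```python
#!/usr/bin/env python3
# Minimal sketch: list the GPUs the Linux kernel exposes via sysfs/DRM.
# If the iGPU is still enabled alongside a dGPU, it should appear here too.
import re
from pathlib import Path

VENDORS = {"0x8086": "Intel", "0x1002": "AMD", "0x10de": "NVIDIA"}  # common PCI vendor IDs

for card in sorted(Path("/sys/class/drm").iterdir()):
    if not re.fullmatch(r"card\d+", card.name):
        continue  # skip connector and render-node entries
    vendor_file = card / "device" / "vendor"
    if vendor_file.exists():
        vendor_id = vendor_file.read_text().strip()
        print(f"{card.name}: {VENDORS.get(vendor_id, vendor_id)}")
```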
RSAUser - Wednesday, February 9, 2022 - link
The iGPU being disabled hasn't been true for a number of years now on Windows 10, AFAIK. I definitely see it get used when I do some QuickSync encoding.
Oxford Guy - Friday, January 21, 2022 - link
Tricking uninformed buyers with a deceptive name (6500 XT), a deceptive interface (4x PCI-e 3), and useless ray tracing performance -- it's about more than price. It's about consumers standing up to the next iteration of the FX 9590.
Spunjji - Friday, January 21, 2022 - link
The FX 9590 was terrible because it was purporting to compete with products it couldn't touch and blew an exorbitant amount of power to do so. I agree that this card should be named something like RX 6400, but whichever way you slice it, it's only claiming to be the lowest-end member of AMD's desktop GPU family, and it is precisely that.
Oxford Guy - Friday, January 21, 2022 - link
It is precisely scammy.
It uses ray tracing to sell to buyers even though AMD didn't give it enough ray tracing performance.
It uses a big name to fool buyers.
It uses the expectation of 16X PCI-e 3 support to fool buyers.
Enough with the excuses. They're not going to whitewash this.
Qasar - Sunday, January 23, 2022 - link
" It uses ray tracing to sell to buyers even though AMD didn't give it enough ray tracing performance." the SAME thing could be said for the RTX 20 series less the top end 2070s and the 2080s, heck, even the low end RTX 3070s and lower can struggle with RT in some games.
AshlayW - Tuesday, January 25, 2022 - link
Qasar, are you seriously comparing the RT performance of TU104/GA104 to Navi 24?
Are you feeling OK?
Qasar - Saturday, January 29, 2022 - link
Nope. I am comparing the low-end performance with what it does for games: the RTX line to itself, and the Radeon line to itself. When you compare RTX to Radeon, most, if not all, will agree RT on RTX is the better option. But no one I know who is even interested in RT would consider it on the mid-range and lower cards. They would all get the higher-end 3070s and above, as well as the 6800 XT and 6900 XT lines, and maybe the higher-end 6800s.
CrystalCowboy - Wednesday, January 19, 2022 - link
Digging hard for a ray of hope in this product announcement: AMD managed to get the clock rate way up there. That bodes well for RDNA3.
I am a little surprised/disappointed by the power use. At 107 W, the RX 6500 XT will require an auxiliary power connector. And the RX 6400 will allegedly be 53 W. That leaves a gap for "as much graphics as you can get for 75 W", unless there is an RX 6400 XT in the works for later.
Spunjji - Wednesday, January 19, 2022 - link
It seems like they pushed the clocks (and thus TDP) higher to bring it closer to competing with the 3050. It's a shame - it would have been nicer at 75 W with whatever performance that gave. OEMs could always bring out models with extra power connectors and unlocked clocks.
mode_13h - Wednesday, January 19, 2022 - link
I wish someone would do a 75 W version of this card by cutting the shader clocks, but I think they wouldn't be allowed to call it "RX 6500 XT". Maybe also make it half-height, half-length. Or full-height, but single-slot.
neblogai - Wednesday, January 19, 2022 - link
Maybe not normal AIBs, but we have seen 50 W Chinese no-name RX 460 cards. So I would not be surprised to see no-name or OEM sub-75 W variants of the 6500 XT too.
Unashamed_unoriginal_username_x86 - Wednesday, January 19, 2022 - link
They've managed to reduce the transistor budget by 7% compared to Polaris and perform better at ≤1080p on PCIe 4.0, but they're two node shrinks ahead, have a 4-year architectural lead over GCN4, had to give up multiple features, and get outpaced at 1440p.
I hope the production volume makes this all worth it.
Spunjji - Friday, January 21, 2022 - link
Polaris is an interesting comparison, as they've given up some features but added some others (Infinity Cache and RT cores are the most notable). It's a much more resource-efficient proposition for a market still dominated by 1080p displays - very much a card for this peculiar moment in time.
AFAIK it's not really two node shrinks - more like 1.5. N7 to N6 is an advantage, but only really in area, and not a lot there.
Production volume is the only reason this design exists. AMD committed to a small-die, high-volume design, which is something Nvidia haven't done since GP108 - everything they've released into the market since then has been a recycle of GP108 or a cut-down version of something larger. I haven't seen a solid die size measurement of GA107, but apparently it's around 190 mm², and Nvidia's official competitor for this on desktop is based on GA106 at 276 mm².
Kurosaki - Wednesday, January 19, 2022 - link
So $300 gives you subpar 1080p performance today. Still pass.
Spunjji - Friday, January 21, 2022 - link
It will until the market gets back to something like normal.
Harry Lloyd - Wednesday, January 19, 2022 - link
This could end up being the worst card of the decade. Same performance and price as the RX 580 five years ago. And that is just talking about MSRP, with the real price being 50-80% higher.
mode_13h - Wednesday, January 19, 2022 - link
We don't know what the real price will be.
Look at the upside: if the GPU market somehow corrected to how it used to be, this would suddenly become a sub-$100 card that's still a good bit faster than iGPUs.
Also, comparing against the MSRP of a card from 5 years ago ignores inflation.
Harry Lloyd - Wednesday, January 19, 2022 - link
Inflation? 5 years ago you could get a 4-core/4-thread CPU for ~$180. Now you can get 6 cores/12 threads for the same price. That is more than double the performance, on top of clock speed and IPC improvements.
The RX 580 had a die size of 232 mm² on a fairly new 14 nm process. The 6500 XT has a die size of 107 mm² on the newest 6 nm process. That is almost microscopic. This card literally could not be any more cut down. It is a useless, garbage product with a ridiculous price. Horrendous.
foobaz - Wednesday, January 19, 2022 - link
The RX 580 was a mid-range card, and the 6500 XT is low-end. Wouldn't it be more appropriate to compare to low-end Polaris cards like the RX 550 or RX 560? Those cards had die areas of 101 and 123 mm². They cost $79 and $99. The die size and anemic performance are business as usual for AMD's low end; the only difference is the price has doubled.
High prices are no fun for consumers, but it's not AMD's fault the GPU market is crazy right now. AMD has set the price correctly for current market conditions. After all, they are not a charity. There are plenty of people willing to spend $199 for a low-end card, and AMD wants to do business with them.
Oxford Guy - Friday, January 21, 2022 - link
Releasing a 4x speed PCI-e 3.0 GPU and calling it 6500 XT is crazy and it is absolutely the fault of AMD.
Dropping Fiji driver support months ago was also the fault of AMD.
Spunjji - Friday, January 21, 2022 - link
It's a PCIe 4.0 GPU...
Bringing up unrelated things you don't like doesn't make for a coherent argument.
Oxford Guy - Friday, January 21, 2022 - link
Are you pulling these ultra-weak rationalizations out of a hat?
Qasar - Sunday, January 23, 2022 - link
" Are you pulling these ultra-weak rationalizations out of a hat? "why not you are.
Spunjji - Friday, January 21, 2022 - link
Why not compare it to the previous ~100 mm² die, mobile-first design instead: Polaris 23, AKA RX 550X Mobile?
It looks extremely good when you make that comparison...
haukionkannel - Wednesday, January 19, 2022 - link
Well, this GPU may end up being the best-priced GPU on the current market!
It tells you more about the current market than I care to think about...
Oxford Guy - Friday, January 21, 2022 - link
TechSpot found this card to have worse performance than a 570 in more than one game.
Spunjji - Friday, January 21, 2022 - link
"More than one game" here meaning "two out of twelve", unless you count the margin-of-error difference of 2fps in Resi 2 as a win. In both of those cases - Rainbow Six Siege and F1 2021 - the minimum frame-rate is above 60fps at high details, so the difference is academic for the class of user this card is aimed at.RX 570s are selling for between £200 and £280 second-hand on eBay in the UK. RX 6500 XT cards start around £230. If I were building a new machine, I'd definitely pick the latter (unless it were based on a PCIe 3.0 board / CPU).
Kurosaki - Friday, January 21, 2022 - link
I'm not going to build a new machine until this craziness has cooled down by about 79%.
Tams80 - Saturday, January 22, 2022 - link
Yes, but quite a lot of people are, and this is a good option for them.
Oxford Guy - Tuesday, January 25, 2022 - link
You're making excuses for worse performance than a 570?
Spunjji - Friday, January 21, 2022 - link
In the UK at least, an RX 580 that has been used for the past 5 years will cost you more than a brand-new RX 6500 XT. It's not really fair to compare prices now and then when today's prices are absurd for literally every option available, except to point out that all GPUs are currently overpriced.
mode_13h - Wednesday, January 19, 2022 - link
> On no other processor will you find the L3 cache bandwidth be added to the
> DRAM bandwidth – even in designs where the L3 cache is exclusive of the DRAM.
Ryan, you're thinking in a CPU mindset. GPUs could do some amount of non-temporal reads or writes, in which case the Infinity Cache is bypassed. In fact, this *must* be the case, otherwise VRAM access would be completely bottlenecked on the Infinity Cache and they *certainly* wouldn't have spent the extra cost on the latest and greatest GDDR6 memory.
Now, I'll grant you that if AMD wants us to take that consideration into account, they should also disclose what % of memory operations bypass the infinity cache. To your point, the two won't be perfectly additive, but I'll bet the aggregate bandwidth is indeed higher than the GDDR6 throughput, in actual practice.
> a tiny local cache that’s 1/256th the size of the VRAM pool
That's actually a better ratio than we see in a lot of desktop CPUs. Granted, CPUs tend to have better access locality, but if the infinity-cache is being used judiciously, then it might have a higher hit-rate than we'd expect.
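A quick toy model makes both sides of this concrete: the two figures are only truly additive at one particular hit rate, yet the aggregate can still comfortably exceed the GDDR6 number alone. The cache bandwidth below is a hypothetical placeholder, not an AMD figure; the 144 GB/s is just the card's 64-bit / 18 Gbps GDDR6 peak.

```python
# Toy model of overlapping cache + DRAM traffic on a GPU. A fraction
# `hit_rate` of the requested bytes is served from the Infinity Cache and the
# rest from GDDR6; whichever channel saturates first limits throughput.
# CACHE_BW is an assumed value for illustration, not an AMD-published figure.

DRAM_BW = 144.0    # GB/s: 64-bit bus at 18 Gbps GDDR6
CACHE_BW = 400.0   # GB/s: hypothetical Infinity Cache peak

def effective_bandwidth(hit_rate: float) -> float:
    if hit_rate <= 0.0:
        return DRAM_BW
    if hit_rate >= 1.0:
        return CACHE_BW
    return min(CACHE_BW / hit_rate, DRAM_BW / (1.0 - hit_rate))

for hr in (0.25, 0.50, 0.75, CACHE_BW / (CACHE_BW + DRAM_BW)):
    print(f"hit rate {hr:.0%}: ~{effective_bandwidth(hr):.0f} GB/s sustained")
```

Only at that last, "matched" hit rate does the total actually reach cache + DRAM; everywhere else it is lower (Ryan's point), while still sitting well above 144 GB/s for plausible hit rates (mode_13h's point).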
Ryan Smith - Wednesday, January 19, 2022 - link
Thanks for the feedback, mode_13h. The point I'm ultimately trying to make is that I consider it absurd to include the L3 cache in bandwidth figures. It's not equivalent to the VRAM pool, if nothing else because it's far too small. Cache is not main memory; it's cache.
psychobriggsy - Wednesday, January 19, 2022 - link
It's cache, but it's holding the most-accessed buffers; the vast majority of reads and writes in a typical game engine will be hitting the cache (as long as the render resolution is reasonable and the texture res is not high). The effective share certainly isn't 1/256th - it might even be 25-50% - although we'd have to ask a game engine developer for a best estimate.
Spunjji - Friday, January 21, 2022 - link
This is true, but I still agree with Ryan that it's not a great idea to treat the bandwidth as purely additive for promotional reasons. It's marketing silliness.
GreenReaper - Thursday, January 20, 2022 - link
Perhaps it's better to think of it like two different forms of storage. Our website uses an SSD for a ~50GB database and small files, and it gets a heck of a lot of reads and writes in comparison to our 4TB of content on the HDD array. If we used the HDD for the DB - and we tried it once - the latency for those thousands of requests would be killer, even if the total bandwidth of each channel is similar.
Or if you want equivalent data, consider swap on an SSD vs. HDD - you can have both set up, with the SSD at a higher priority, and if data is needed from both, then both bandwidths are in play at once - it doesn't have to go through the SSD first. Like a CPU cache set to cache-as-RAM mode.
A separate portion of the SSD might be set up as bcache (similar to Rapid Storage Technology), and in that mode it might act more like you'd expect a CPU cache to work. Even then, sequential I/O goes direct to the HDD, so as not to flush the cache out.
BushLin - Wednesday, January 19, 2022 - link
Don't buy this card for a PCIe 3.0 system unless it's solely for games which won't exceed the 4GB VRAM, or some miracle has been proven in testing for this generation.
You can get away with exceeding 4GB VRAM on a PCIe 4.0 x4 link with negligible impact.
You can get away with a PCIe 3.0 4x link if you don't exceed the 4GB VRAM.
Exceeding the 4GB VRAM on PCIe 3.0 4x tanks performance... around half the performance in some cases, if you follow Hardware Unboxed's mock-up with last-gen hardware of similar performance, comparing 4GB and 8GB cards.
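For reference, the raw link numbers behind this: theoretical per-direction figures only, ignoring packet overhead. The point is that a PCIe 3.0 host cuts an already-narrow x4 link in half, which is exactly when VRAM spill-over starts to hurt. A small sketch of the arithmetic:

```python
# Theoretical per-direction PCIe bandwidth for an x4 link. Real-world
# throughput is somewhat lower once packet/protocol overhead is included.

ENCODING = 128 / 130                 # 128b/130b line coding (PCIe 3.0 and newer)
RATE_GT_S = {"3.0": 8, "4.0": 16}    # transfer rate per lane

def x4_bandwidth_gbs(gen: str, lanes: int = 4) -> float:
    usable_gbit = RATE_GT_S[gen] * ENCODING * lanes
    return usable_gbit / 8           # bits -> bytes

for gen in ("3.0", "4.0"):
    print(f"PCIe {gen} x4: ~{x4_bandwidth_gbs(gen):.1f} GB/s per direction")
# ~3.9 GB/s vs ~7.9 GB/s. Anything that doesn't fit in the 4GB of VRAM has to
# stream over this link, so the penalty is much larger on a 3.0 host.
```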
BushLin - Wednesday, January 19, 2022 - link
So Gamers Nexus have now tested an actual RX 6500 XT, and the games using more than 4GB VRAM took a ~15% hit on PCIe 3.0, so it's not like AMD didn't think about the problem... It takes the card to somewhere around the performance of a GTX 1060.
psychobriggsy - Wednesday, January 19, 2022 - link
Gotta say a 96-bit bus and 6GB would have been preferable, at $199 in 2022.
mode_13h - Thursday, January 20, 2022 - link
The 4 GB capacity seems aimed at making it unattractive to Ethereum miners. I think 6 GB is still enough for them.
Oxford Guy - Friday, January 21, 2022 - link
And Ethereum made AMD kneecap this card with 4x PCI-e 3.0.
I'm glad AMD is looking out for gamers so well.
Spunjji - Friday, January 21, 2022 - link
Why do you keep lying about the interface being 3.0?
Oxford Guy - Friday, January 21, 2022 - link
Desperation is clouding your ability to comprehend my posts.
mode_13h - Saturday, January 22, 2022 - link
I'll grant that the x4 PCIe interface was a poor decision. It seemed to be made in the same mindset as other painful cuts - to keep the die size, and thus the price, at a bare minimum.
Spunjji - Friday, January 21, 2022 - link
Yeah, this is 100% A Bad Buy for people running PCIe 3.0.
RaduR - Wednesday, January 19, 2022 - link
AMD did a good thing. They came to market with a card with 4GB of RAM that cannot end up in the hands of miners. Also, this card plays Roblox and Fortnite very well, so you can imagine where I am seeing a huge unexplored market.
Besides that, more cards on the market in that segment lowers pressure on the high end, and with less demand you will have lower prices.
They could at any time do a 4GB hardware-locked 6600 / 6600 XT that could sell for 300 USD and play 1080p brilliantly.
Do not forget the mass market is 1080p; only a few of us are "that passionate" about having more than that.
And WHERE is the review? I also miss the young and passionate Anand. 20 years have passed... and... it's too bad.
Oxford Guy - Friday, January 21, 2022 - link
You forgot the kneecapping of the PCI-e 3 interface to just 4x.
You didn't consider the owners of 4GB Fiji cards, who haven't been getting driver support from AMD since July -- in the midst of the GPU crisis.
You forgot that this thing has ray tracing support but not PCI-e 3 16x speed. The absurdity of that is profound.
You also didn't consider that AMD is using precious fab wafers to make this trash, making it even more difficult to buy something decent at a decent price.
Spunjji - Friday, January 21, 2022 - link
Fiji cards didn't just magically break. I'm running a Pitcairn card in a backup system and it still works fine. It can even do FSR.
"You also didn't consider that AMD is using precious fab wafers to make this trash, making it even more difficult to buy something decent at a decent price."
Accounting for yield, you get about 3x as many of these for a wafer as for the next-largest chip, Navi 23. We already know AMD can't get enough of its larger chips out to meet demand. It's preposterous to suppose that producing a larger quantity of chips for the least-served segment of the market will somehow make supply issues worse. You're full of it.
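A rough sanity check on that ratio, using the usual gross-die approximation for a 300 mm wafer and a simple Poisson yield model. The Navi 24 area is from the article; the Navi 23 area (~237 mm²) and the defect density are assumptions for illustration only:

```python
import math

WAFER_DIAMETER_MM = 300
DEFECT_DENSITY_PER_CM2 = 0.1   # hypothetical defect density

def gross_dies(area_mm2: float) -> float:
    """Standard approximation: wafer area / die area, minus an edge-loss term."""
    d = WAFER_DIAMETER_MM
    return (math.pi * (d / 2) ** 2) / area_mm2 - (math.pi * d) / math.sqrt(2 * area_mm2)

def good_dies(area_mm2: float) -> float:
    """Gross dies scaled by a simple Poisson yield model."""
    return gross_dies(area_mm2) * math.exp(-DEFECT_DENSITY_PER_CM2 * area_mm2 / 100)

navi24, navi23 = 107, 237  # mm^2; the Navi 23 figure is an assumption
print(f"Navi 24: ~{good_dies(navi24):.0f} good dies/wafer")
print(f"Navi 23: ~{good_dies(navi23):.0f} good dies/wafer")
print(f"Ratio:   ~{good_dies(navi24) / good_dies(navi23):.1f}x")
```

Under those assumptions the ratio comes out around 2.7x, close to the quoted 3x, and the gap only widens with a higher defect density, since yield punishes larger dies disproportionately.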
Oxford Guy - Friday, January 21, 2022 - link
'Fiji cards didn't just magically break.'
Fascinating analysis. Meanwhile, fan control is no longer working in Windows 10. Driver support may not matter to you, but it matters to anyone who has some sense. Games that have serious bugs will generally not be fixed when driver support is dropped. Operating systems are constantly being changed by MS.
"You're full of it."
Full of sense, yes. You, meanwhile, are trying to justify wasting fab capability to push a chip with a 4x PCI-e interface.
PeachNCream - Friday, January 21, 2022 - link
The RX 6500 XT supports PCIe 4.0 x4, not PCIe 3.0 x4.
Oxford Guy - Friday, January 21, 2022 - link
It runs at 4x in PCI-e 3 boards. That causes performance to plummet by around 25%, to below that of the Polaris 570 of yore, in more than one game. You can see the data at TechSpot.
Qasar - Sunday, January 23, 2022 - link
Doesn't matter, it will run on PCIe 3 (heck, it might even work on PCIe 2), Oxford Guy. This is a PCIe 4.0 card with a x4 electrical connection - look at the specs. Bus Interface: PCIe 4.0 x4, as seen here:
https://www.techpowerup.com/gpu-specs/radeon-rx-65...
Even this very article says "And a PCIe 4.0 x4 bus."
Oxford Guy - Tuesday, January 25, 2022 - link
Why state the obvious and ignore the actual problem?
'it will run on PCIe 3'
Very poorly.
Vitor - Wednesday, January 19, 2022 - link
The California wildfire excuse is more dated than the performance of this graphics card.
iranterres - Wednesday, January 19, 2022 - link
Shameful specs at an abusive price, not much better than the IGP on my 2020 laptop. Also, the LTT review already showed how bad this card is compared to 5-year-old GPUs.
mode_13h - Thursday, January 20, 2022 - link
No, it's a lot better than integrated graphics, unless you're using games or settings that don't fit its 4 GB of memory.
Spunjji - Friday, January 21, 2022 - link
"not much better than the IGP on my 2020's laptop"There's not a single iGPU for which this is even close to being true.
You can buy a 5-year-old card second-hand for the same price as (or more than) this card brand-new, or you can buy this. Neither is a great option, but it's an option where there were none until now.
Oxford Guy - Friday, January 21, 2022 - link
We can thank AMD for giving us this junk card as the other option. Thanks, AMD!
IBM760XL - Thursday, January 20, 2022 - link
Micro Center's site indicates they have these (in-store only) for $240-$270 depending on the model. Not exactly MSRP, but less of a markup than any other card. A better deal than the closest alternative, an RX 560 for $200, and both cheaper and higher-performing than the 1050 Ti that has been the highest-end in-stock option for ages.
I see little reason to switch to it from my RX 480, but maybe this is the start of the market returning to normalcy. It would be great to see cryptocurrency crash, too.
Oxford Guy - Friday, January 21, 2022 - link
If you're on a PCI-e 3 board, it's the worst GPU deal I can remember.
Spunjji - Friday, January 21, 2022 - link
You've already forgotten the prices the 1050 Ti is going for today? Your memory must be short.
Oxford Guy - Friday, January 21, 2022 - link
Prices and GPU design are two different things. This is a worse design.
mode_13h - Saturday, January 22, 2022 - link
It depends on the game, really. Some games really aren't PCIe-bottlenecked on such a GPU. And if you're comparing to the GTX 1050 Ti, this has about 2.5x the TFLOPS and about 50% more GDDR memory bandwidth. It will typically be the better-performing card.
Oxford Guy - Tuesday, January 25, 2022 - link
It depends on the design.
AMD controlled the design and gave us junk.
Oxford Guy - Friday, January 21, 2022 - link
Slower than cut-down Polaris on PCI-e 3 boards.
'likely to be a welcome relief for the capacity-constrained discrete video card market'
It's a relief to do the time warp again. Of course, was AMD trying to foist PCI-e 3.0 cards running at 4x speed in the past or is this a new amazing innovation from such a dear friend?
Samus - Friday, January 21, 2022 - link
It's very interesting how they neutered this card down to a competitive price, keeping most of the relevant features (almost nobody buying a card in this class will need more than 2 video outputs, etc.).
But a lot of these low-end $200 cards, for years, have failed to really compete with cards that are now nearly a decade old. It's 2022, and this card isn't any faster than my GTX 970 from 2014. Basically all it has going for it is using 50 watts less power...
Spunjji - Friday, January 21, 2022 - link
It's a fair bit faster than a GTX 970 unless you're using it on PCIe 3.0 - and even then, generally speaking, still faster. Unfortunately, prices have been going upwards - and value-for-money backwards - for a while now.
Oxford Guy - Tuesday, January 25, 2022 - link
I'm sure it's a far better GPU than a Radeon 9700, too.
Unless...
mbucdn - Friday, January 21, 2022 - link
Junk filler. Such a sorry state for people who need a new card.
Abort-Retry-Fail - Saturday, January 22, 2022 - link
Can't really believe all the shade being tossed at the Radeon RX 6500 XT. It is essentially the low end of discrete 1080p gaming -- Radeon RX 580/GeForce GTX 1650 Super performance from a 107 mm² die. Not asking for much, huh?
I suspect Navi 24 and upcoming variants will become the AMD 'Lexa' of old 'Arctic Islands' arch. I bet yields got to the point they were harvesting 375+ dies per wafer. It was their workhorse.
I bet this bodes well for the next round of APU graphics engines, too. Let Dr. Su do her voodoo on the 12 CU RX 6300M... with Infinity Cache? buhhhh-BOOM! goes that APU.
mode_13h - Saturday, January 22, 2022 - link
Interesting point. Imagine this die as a chiplet in their next APU... that would be interesting!
Abort-Retry-Fail - Monday, January 24, 2022 - link
Actually, 13, I didn't think about a graphics chiplet -- though that would be something. I had my engineer hat on, in that a cut-/stripped-down RX 6300M would land in the current RX 550-560 performance range. That could boost the APU's Vega graphics engine 30-40% with 'RDNA 2.2'.
And thinking like that AMD engineer: the shader cores, fancy L3 cache, Display/Video Core Next decoders, etc., are essentially the same masks for the 6300M and the new APU. No muss, no fuss.
mode_13h - Monday, January 24, 2022 - link
Yeah, I mean Infinity Cache directly tackles one of the main performance bottlenecks of APUs. If AMD can get a good boost out of a mere 16 MiB, then it's likely on its way to an APU near you!
Plus, when you consider the bandwidth they got out of the Infinity Cache on this GPU, that would be pretty good in an APU. Certainly an improvement over regular 4x32-bit DDR5.
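To put numbers on that last comparison - dual-channel DDR5 is four 32-bit subchannels, 128 bits total - here is the simple peak-bandwidth arithmetic, with a few common speed grades chosen as examples:

```python
# Peak theoretical bandwidth of dual-channel DDR5 (4 x 32-bit subchannels,
# 128 bits total). Speed grades are just common examples.

BUS_WIDTH_BYTES = 128 // 8   # 16 bytes transferred per beat

for mt_s in (4800, 5600, 6400):
    gb_s = mt_s * BUS_WIDTH_BYTES / 1000
    print(f"DDR5-{mt_s} dual channel: ~{gb_s:.1f} GB/s")
# ~76.8 / ~89.6 / ~102.4 GB/s - well short of even this card's 144 GB/s GDDR6,
# which is the kind of gap an on-die Infinity Cache could help an APU bridge.
```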