@Ryan: AMD could kill the 3090 with more TDP on the 6900 XT.
What is the real explanation for artificially limiting the full 80-CU die to low clock speeds in order to maintain a 300W TDP while competing with a 350W TDP card like the 3090?
I mean, the excuse of a performance-per-watt increase over first-gen RDNA from 54% for the 6800 XT to 65% for the 6900 XT is simply ridiculous.
We are talking here about a flagship card - the 6900 XT - competing with a 350W TDP monster like the 3090.
AMD should have gone to at least 355W, as some earlier leaks suggested, in order to clearly beat the 3090 by a decent margin.
It's possible the performance just doesn't scale much further with the extra wattage. There must be a reason, as they'd have had plenty of time to make the tweak if it were that simple. Maybe the board design just can't support that power?
A slight increase from 72 CUs (6800 XT) to 80 CUs (6900 XT) gave a new card competitive with the 3090 using the exact same clocks/TDP for both cards. So the architecture scales extremely well.
Also, judging by the rumors and the PS5, the boost clock could be at least 10% higher.
If they don't want to support a beefier cooler for a reference card, they should allow AIBs to do it. It is very strange that AMD doesn't allow custom 6900 XT cards from the beginning.
Could they be afraid of an Nvidia response in the form of a 3090 Ti / Titan, without having prepared an RDNA2/HBM2 card or something equivalent?
The same clock/TDP for both the 6800 XT and 6900 XT simply doesn't add up.
10% increase in resources to get a 10% benefit is one thing, but I'm talking about how well the performance scales with additional clock speed vs. power usage. It's possible that throwing another 50W of power at the board will only get them a few percent extra in performance.
Designing a card to soak up 300W of power isn't simple, either. Look at the issues Nvidia have had with theirs.
I'd wager they're keeping the 6900XT to themselves because they can't bin enough of the chips to make it worth sending out to AIBs.
This is just me trying to make sense of it, of course - I'm guessing. But it seems more sensible to assume a reason than to assume they'd just... skip the opportunity.
Well, we saw those leaks with a card hitting 2.8GHz... it's possible they're just holding stuff back for the inevitable 3080 Ti or Supers from Nvidia. They could also switch to GDDR6X, increase the bus size, add memory, add more cache, and that alone would probably do the trick - but if they also have headroom on the clocks with a TDP increase... it could be huge. AMD with the performance crown for the first time since the 290X.
Yes, Ampere looked like it had a frequency wall at 2.0GHz, mitigated by a recent driver, but I never expected RDNA2 to have a power wall at 300W as you describe it.
All the latest leaks and rumors were talking about custom 3080 cards with TDPs above 350W.
Yes, they could be wrong, as with other things regarding the RX 6000 series, and your guess could be right, but I doubt it.
I mean, 300W could be a sweet spot for RDNA2 efficiency, but AMD should/could sacrifice a little (or more) of that efficiency for a little more performance.
Custom 3080 cards and more tech info about increased TDP behavior of RDNA2 cards will enlighten us in the near future.
I'm not talking about a power *wall*, just a point of rapidly diminishing returns. We've already seen that with Ampere and the 3090 vs the 3080 - the 3090 should be a LOT faster going off specs, but it just isn't. Think of it the same way Ryzen CPUs overclock - or, to be precise, don't. They don't draw anywhere near as much power as Intel's best, so in theory there's "headroom", but in reality the design is actually at its limits.
AMD have spent the past 8+ years sacrificing efficiency for performance, and when they do so and only get marginal leadership it just gets criticism poured on them (as with the 290X). I don't think pushing for a few percent extra performance at the cost of 50-100W more power makes sense.
As for those leaks about them hitting 2.8GHz - even if they're true (bearing in mind that there's a cottage industry built around overhyping AMD products and then claiming to be "disappointed"), we have no idea under what circumstances they hit clocks higher than 2.2GHz and what the resulting performance is.
I'm going with Occam's razor on this: if the architecture could reliably and consistently hit a significantly higher level of performance for about 50W more power, and that would allow them to beat the 3090 in both performance and performance-per-watt, then they'd have done it. But they haven't. Why would they top out with a bleeding-edge bin that *just* misses the target if it were trivial for them not to?
I certainly don't want to be misunderstood; I'm very pleased by the RX 6000 series announcement.
But the similarity of the 6800 XT and 6900 XT in all but the number of CUs (80 vs 72) is very strange to me.
I have never mentioned anything exaggerated regarding clock frequency like 2.8GHz. I only suggested 10% more than the existing clocks.
I don't want to repeat myself, but 300W for both the 6800 XT and 6900 XT simply doesn't add up.
Hopefully my comments were heard by AMD (!) and we will see what AIBs can say regarding the 6900 XT.
And I really hope AMD allows them to do whatever they think they need to do in order to be even more competitive with the 3090, meaning increased clocks and TDP: https://videocardz.com/newz/amd-in-talks-with-aibs...
Power has a HARD LIMIT on scaling! Would an RX 6900 XT get better FPS when running at 350+ watts? Yes - but it likely would not exceed about a ~4% improvement - and that is being generous.
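For anyone wondering where a figure like ~4% comes from, here's a rough back-of-the-envelope sketch. The assumptions are mine (dynamic power scaling roughly with voltage squared times frequency, voltage rising about in step with frequency near the limit, and FPS tracking maybe 70% of any clock gain), not anything AMD has published:

```python
# Back-of-the-envelope: what does +50W buy on a 300W card?
# Assumptions (illustrative only): dynamic power ~ V^2 * f, voltage rises
# roughly in step with frequency near the limit (so power ~ f^3), and game
# FPS tracks ~70% of any clock gain because games aren't purely clock-bound.

base_power = 300.0                                      # W
extra_power = 50.0                                      # W
power_ratio = (base_power + extra_power) / base_power   # ~1.17

clock_gain = power_ratio ** (1 / 3) - 1                 # ~5.3% higher clock
fps_gain = 0.7 * clock_gain                             # ~3.7% more FPS

print(f"Power increase:       {power_ratio - 1:.1%}")
print(f"Estimated clock gain: {clock_gain:.1%}")
print(f"Estimated FPS gain:   {fps_gain:.1%}")
```

Under those assumptions, +50W on a 300W card buys only a few percent of extra FPS, which lines up with the diminishing-returns argument above.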
Also, just because AMD has not SOLD a 350 watt card in RDNA 2 - you do realize that power sliders exist. For example: I currently (until RDNA 2 and Ampere are in stock) use a G1 Gaming GTX 980 Ti, a 250 watt GPU, but because I can add +39% to the power slider, I can FEED my GPU 348 watts! As a result, instead of a variable boost around ~1.3 GHz, my 980 Ti is 100% frequency stable at 1501 MHz core, and 8 Gbps VRAM (a 14% OC on the VRAM too).
There are 980 Tis with three power connectors, and on air even those max out around ~1580 MHz, and there is less than 1% difference in FPS from my 1501. In fact, the diminishing returns REALLY start around 1400 MHz for the GeForce 900 (Maxwell) series - you can throw 500 watts at the card with LN2 cooling nearing 1700 MHz core, and it BARELY touches frame rates.
Look at RDNA 1 and Ampere - you can add available power ALL DAY - see the ASRock 5700 XT Taichi - and while you MIGHT even surpass 10% extra frequency, performance? Closer to about 2% - especially with Ampere, but Turing was similar as well. The 5700 XT GPUs all boost as fast as they can, and notice how there is BARELY EVER more than 2% speed between the best and worst (reference) models!
The ONLY thing that might make frequency scale better with EXCESSIVE power in RDNA 2 is the Infinity Cache - if overclocking increases the bandwidth and decreases the effective latency to said Infinity Cache. But even then, every uArch has a limit, and a point of diminishing returns.
AMD COULD have sold the 6900XT at 350+ watts, but that would, at best, probably net it another ~5% FPS being generous.
Believe me when I say, IF AMD COULD have sold a 6950XT with insane clocks and power budget that actually STOMPED the RTX 3090 - they DEFINITELY WOULD HAVE! (They still might, Water cooled edition or something) - but it won't be worth the price, hell the 6900XT isn't worth it with the 6800XT being ~$350 cheaper, and seemingly 98% as fast or more. And after ACTUAL overclocking, I bet 6900XT is all but tied.
AMD is finally being seen as a "Premium Product" - for the last 15 years AMD were B-tier at best, but Zen 2 has shown AMD can provide class-leading performance and efficiency.
Notice the LACK of AMD GPUs in Laptops? Notice how the few laptops with AMD GPUs could NEVER keep up with Nvidia at the same power? AMD has NEVER been a big player in Mobile GPUs - but with THIS EFFICIENCY AND DESIGN - AMD are poised to STOMP Ampere GPUs in Laptops! Which is a GIGANTIC and growing market - my bro has a RTX 2070 laptop, he gave me his old GTX 1060 model! Those are the only PCs he buys.
And now AMD has both CPUs and GPUs that can WIN MOBILE GAMING! That is the master plan, after capturing ~50% or better of the Desktop Discrete GPU market anyway!
Yes, I would imagine the choice of clock speed would also be based on the number of units they can get with 80 CUs at the chosen clock. So it's quite possible this was purely a yield-based decision.
Yeah, really not sure what d0x360 is on with this RDNA 1.5 crap. I just did a Google and it looks to be people misunderstanding Microsoft's marketing materials about "RDNA 2 integration" (e.g. software and hardware) and then extrapolating from there - in this case, to an extent that can only be described as absurd.
Full 80-CU chips are very rare! So there is no point giving AIBs something they won't have enough of to sell... for example, Asus alone would need a lot of 6900 XT chips, and because AMD doesn't have that many, there's no need to sell them to Asus.
Ahem... AMD could always release a 6950 and 6950 XT. Think GDDR6X instead of plain GDDR6, a bigger bus, maybe more Infinity Cache, a higher TDP so we can see those magical leaks that showed up to 2.8GHz, more cores, 4 more gigs of memory... and priced at $1300.
Now they would be winning by a large margin and have a lower price. They could do the same across the entire line: a 6850 XT with more RAM, GDDR6X, more cache... that alone could allow it to spank the 3080 Ti (20 gig variant).
GDDR6X = more power use, more cost
Bigger bus = non-trivial adaptation, more power and cost
More Infinity Cache = new die, WAY more cost
4 GB more memory = more cost, power, and a weird bus arrangement
Maybe that would get them enough performance for leadership, maybe not, but it would spoil their obvious focus on producing performance-competitive cards that can still generate high margins. They were unable to do that with Fury and Vega because of the costs of the HBM vs the performance achieved - switching to a larger bus or a new die would have a similar effect, not even accounting for the extra costs of qualification. GDDR6X is a *maybe* for a refresh part, I guess, but that would ruin their power efficiency.
The whole point of Infinity Cache is to remove the bandwidth constraints of GDDR6 to achieve an effective bandwidth that is higher than the one the 3090 gets with its 384-bit bus and GDDR6X. In fact quite a bit higher. There would be literally no point in using GDDR6X for AMD as the effective performance increase would be minuscule. I do think a 6950XT is possible if AMD feels the need for it but it would be a limited issue version using very rare binned chips to get slightly higher clock speed, and would almost certainly double the VRAM to 32GB.
When will we see reviews? On paper, AMD looks to have knocked it out of the park. They are getting 2x the bandwidth of a 384-bit bus via a 256-bit bus + 128MB cache. The 6900 XT is trading blows with the 3090. I'm actually impressed.
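To put that bandwidth claim in rough numbers (using published memory specs plus the ~2x Infinity Cache multiplier quoted above, which is a vendor claim rather than a measurement):

```python
# Effective-bandwidth sketch: published memory specs plus the ~2x Infinity
# Cache multiplier claimed above (a vendor claim, not a measurement).

def gddr_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Raw memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

rtx_3090_raw = gddr_bandwidth_gbs(384, 19.5)       # GDDR6X -> 936 GB/s
rx_6900xt_raw = gddr_bandwidth_gbs(256, 16.0)      # GDDR6  -> 512 GB/s
rx_6900xt_effective = rx_6900xt_raw * 2.0          # with 128MB Infinity Cache

print(f"RTX 3090 raw:         {rtx_3090_raw:.0f} GB/s")
print(f"RX 6900 XT raw:       {rx_6900xt_raw:.0f} GB/s")
print(f"RX 6900 XT effective: {rx_6900xt_effective:.0f} GB/s (per the ~2x claim)")
```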
Yep, it's cheaper not to change their web shops... they still sell everything - to bots, but either way. The AMD suggestion means they should make their web pages better, and that costs some money... why would they do it?
That was my key takeaway too; it's not mind-blowing performance or anything. But like first-gen Ryzen, AMD is back to battle for the high-end market. And if they can actually deliver volume for the holidays, you might find many of these under a Christmas tree if Nvidia doesn't work out their availability issues. You don't get very far with IOUs for a present.
That is the big question that remains to be answered, but considering their confidence and execution as of late, I would be very surprised if they were not able to deliver.
I took the comment at face value. With Intel getting back into the discrete GPU game, the fact that AMD is still launching cards that can swing for the title again makes the market that much less friendly to a newcomer.
Technically you're supposed to use a 650W minimum on the 2070S (at least, according to nVidia's specs), but a quick search shows it tends to use about 215W at max load. With the 6800XT being a 300W card, practically speaking you probably aren't overtaxing your PSU, since the 3600 is a 65W part, but you might be pushing it. Going to a 650W is probably safer.
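For anyone doing the same math at home, here's the rough tally I'd sketch out before deciding on a PSU. The GPU and CPU figures are the official board power / TDP numbers; the rest are ballpark assumptions, and transient power spikes argue for extra headroom on top:

```python
# Rough system power tally for PSU sizing. GPU/CPU figures are the official
# board power / TDP numbers; the rest are ballpark assumptions.

components_watts = {
    "RX 6800 XT (board power)": 300,
    "Ryzen 5 3600 (TDP)": 65,
    "Motherboard, RAM, SSDs, fans (est.)": 60,
    "USB peripherals (est.)": 15,
}

total_load = sum(components_watts.values())
for psu_watts in (550, 650, 750):
    headroom = (psu_watts - total_load) / psu_watts
    print(f"{psu_watts}W PSU: ~{total_load}W load, {headroom:.0%} headroom")
```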
We'll see. So far it looks like they're working on drivers and games actually supporting RT via DXR. But even if pure RT performance can't quite match NV's, what matters is how the GPUs perform in actual games where most of the work will still be rasterizing.
That's squarely in rumor territory, but the rumor sites have 6700 XT at 40 CUs (2560 Stream Processors) with 12GB of GDDR6 on a 192-bit bus. If they cut the CUs in half compared to 6900 XT, then maybe like half the performance of the 6900 XT?
More likely to be 60/40 - the 6700XT is liable to have higher clocks and be getting more out of its available resources than the 6900XT can, which is more than likely power limited. It may depend a lot on how they implement the Infinity Cache on lower models, though.
It uses the same chip and will probably have a ton of overclocking headroom compared with the high-end chips. We'll see on release, but based on AMD's past cut-down cards, it's more than likely worthy of the name.
So excited. If I can't pick up a 3070 this week, then at least I know I have a viable alternative when the 6800 is released in a few weeks. It's now a race to see who can provide the most inventory.
> The inclusion of sampler feedback also means that AMD will be able to support Microsoft’s forthcoming DirectStorage API. Derived from tech going into the next-gen consoles, DirectStorage will allow game assets to be streamed directly from storage to GPUs, with the GPUs decompressing assets on their own. This bypasses the CPU, which under the current paradigm has to do the decompression and then send those decompressed assets to the GPU.
This is misleading. You can do direct storage without sampler feedback; it's just harder to make use of the capability. I'm unclear whether DirectStorage proper requires it, but if it does, the causality is the other way around: everyone was implementing it anyway, so there was no point making it optional.
Really liked what AMD did last year with the simultaneous release of Navi and Zen 2 - available on the same date. Sort of wish they had done that this year, too. Ah, well, back to "hurry up and wait"... I also worry about the "starting at" pricing for the GPUs. A $579 2080 Ti-killer is great - while a $700-$800 2080 Ti-killer is not so hot, imo. But if I can get the 72 CU GPU (6800 XT) for close to the "starting at" price AMD states, that is probably where I'm going!
Maybe they wanted Zen 3 to shine and not let RDNA2 tarnish it slightly by association? Not that RDNA2 isn't impressive, but Zen 3 is two leagues ahead based on AMD's presentations. They had to sex up RDNA2 by including overclocked data, which shows their insecurity. When you consider the synergy between the two, it does seem odd to separate them.
Honestly, RDNA 1 was like Zen 1 to me, finally something somewhat competitive at a reasonable value, RDNA 2 is like Zen 2, trading blows for different tasks, but can't claim an absolute crown. Hopefully RDNA 3 will outright be faster.
Where did you get this ridiculous idea? AMD not offering any expensive consumer cards doesn't say anything about whether people can afford nonexistent expensive ones.
Probably from every AMD related comment section whining about prices day in and day out when performance being offered keeps going up. Seems all they care about is price.
But the whining's not been from "AMD fans", as best I can tell - most of it's goalpost-moving by people who have "suddenly decided" to buy Intel instead.
Where were those people when Intel/Nvidia were raising their prices? If Intel/Nvidia do it, it's fine; if AMD does it, it's wrong and a federal offence.
Pretty much. I can only speak for myself, but I've been whining about Nvidia's prices since Pascal released - and I had a pretty good whine when AMD slotted right into Nvidia's existing price structure with Navi 10, too 😅
They will be releasing lower-tier RDNA2 products eventually, too. They are tackling the top end of the stack first, though, as we can see from the execution of their strategy. All good things in time, my friend.
Okay Sonny, riddle me this: How do you reconcile these claims with the hard fact that Nvidia's most popular GPU by a long way, the GTX 1060, directly competed with and cost the same as (or less than, if it's the 3GB variant) AMD's most popular GPU, the RX 580? How would it be that the substantially slower and significantly cheaper GTX 1050 and 1050Ti cards also rank higher than the most popular AMD option?
It sort of looks like most buyers are cost-sensitive, and that has nothing to do with which "team" you support. It seems a bit like Nvidia's most popular products are some of their cheapest. It's *almost* like you're full of shit 😅
Sure, but not everyone cares about these things. If you want to do lots of machine learning with CUDA and stuff like that, nobody is stopping you from investing in nvidia. AMD is clearly targeting gamers, and what gamers care about are the benchmarks showing who has the highest FPS at the best price.
Yes, except during the launch of these GPUs, if you actually watched it, they explicitly stated on multiple occasions that these GPUs are being targeted at gamers.
Is that why Nvidia keeps getting spanked in recent Supercomputing contracts, because your AI/Machine Learning APIs are so vastly superior that these government facilities couldn't be bothered to use them?
When those "little charts and benchmarks" represent the games I'll actually be playing with the cards I buy - and when they show AMD competing *even without the benefits of huge investment in developer relations* - then I'll use them to pick whichever card gives me the performance I want at the budget I want and be happy with it.
But oh, those goalposts, they can always be moved somewhere else for those who are motivated enough to carry them around all day. Feel free to make your purchasing decisions based on whoever spends the most sponsoring developers, if that gives you a sense of value 🤷♂️
Does AMD ray tracing work in current games? For example, can I use ray tracing in Control on these cards? Or is Nvidia RT different, so developers need to patch existing games to support AMD ray tracing?
Honestly I feel going with the RTX 3070 is the better choice, since I can use DLSS 2.0 and RT in the upcoming Cyberpunk 2077. Without DLSS, RT will not run at good fps anyway.
The thing is, DXR is part of the DX12u spec so assuming Intel ever gets anything out, game devs have motivation to support DXR even if they have to patch it in later, since in theory it will become the more common implementation.
Because Intel still will be making integrated graphics, right? So eventually DXR should be supported even on the potato machines of the future.
That said, I never said it would run well - also I sincerely hope nVidia has a way to run DXR only games efficiently.
Again that info might be out there somewhere, but I sure don't see it brought up much.
Without any benchmarks to go on yet, it looks like they've overpriced the 6800 based on the raw performance numbers. The XT is 33% FLOPier but only costs 13% more. I know that actual performance never ends up scaling 1:1 with theoretical numbers, but pricing it at $550 or so would seem to make more sense.
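Here's the arithmetic behind that, using AMD's announced stream processor counts, game clocks and MSRPs - theoretical FLOPS only, so take it as a sketch rather than a performance prediction; it lands roughly on the 33%/13% figures above:

```python
# Theoretical-FLOPS-per-dollar sketch using AMD's announced specs (stream
# processors and game clocks) and MSRPs. Real performance won't scale 1:1.

def tflops(stream_processors, clock_ghz):
    return stream_processors * 2 * clock_ghz / 1000

rx_6800 = {"sp": 3840, "game_clock_ghz": 1.815, "price_usd": 579}
rx_6800xt = {"sp": 4608, "game_clock_ghz": 2.015, "price_usd": 649}

t_6800 = tflops(rx_6800["sp"], rx_6800["game_clock_ghz"])        # ~13.9 TFLOPS
t_6800xt = tflops(rx_6800xt["sp"], rx_6800xt["game_clock_ghz"])  # ~18.6 TFLOPS

print(f"6800 XT has {t_6800xt / t_6800 - 1:.0%} more theoretical TFLOPS")
print(f"6800 XT costs {rx_6800xt['price_usd'] / rx_6800['price_usd'] - 1:.0%} more")
```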
Yeah, I'm a bit confused by the 6800 pricing. I guess they're confident it will beat out the 3070, but it would have been nice to see them do that at a closer price to give a definitive reason to buy their product. I guess that's the downside of competing with a cut-down larger chip vs. having a smaller die like the 3070 uses.
Given that 4GB is enough for 1080p as of yet (according to gamernexus testing) and 8GB can handle 4k without an issue, I doubt that the 8 VS 16GB argument will hold any water until these cards are several years old, at which point they will have two newer generations to work with.
Some people keep their GPUs for that long, but I am a bit skeptical about its relevance. Comparing 1440p performance in modern games between a GTX 970 and 980M (same hardware but with 8GB RAM) I haven't been able to pick out a scenario where I could overwhelm the RAM on the 4GB variant without dropping the frame-rate too low on the 8GB variant for it to matter anyway.
3080 appears to be the better GPU overall, at least on paper, with twice as many shader cores, 40% better FP32 theoretical performance and 320-bit GDDR6X memory (760 vs. 512 GB/s).
Personally, I doubt the 6800XT would manage to surpass 3080 at 4K, or at any resolution for that matter. The rasterization performance simply isn't there.
It's sad that AMD's stuck in an endless loop of catch-up when it comes to GPUs. I was expecting more, to be honest.
I'm thrilled there's reasonable competition on both the CPU and GPU fronts, but let's be real - a company without the deep pockets of either an Intel or an Nvidia even competing on BOTH fronts is amazing, much less having us all hoping for a "de-throning".
Comparing shader core counts between different architectures is meaningless. And if the Infinity Cache does what they say it will, that may mean AMD has better effective memory bandwidth.
>> It's sad that AMD's stuck in an endless loop of catch-up when it comes to GPUs. I was expecting more, to be honest.
Really? I was expecting *less*. I mean perhaps if the 6900xt was only able to match the 2080Ti or 3080 I'd agree with you, but it seems to be toe to toe with the 3090 while using less power, and for a cheaper price. It seems to compete everywhere except for in ray tracing.
I'm failing to see how essentially matching the performance of a competitors chip that was released only a month ago is a "disappointing catch-up." What exactly would have been reasonable to you? Do they need to have a solid +50% on top of the 3090?
Someone's still counting shader cores and "theoretical FP32 performance" even after it's been conclusively established that Ampere is deeply misleading in both aspects?
Someone's still claiming a 4K victory for the 3080 even after we have benchmarks showing parity?
You were expecting to be able to use these lines to troll, so you did, regardless of the reality of the situation. Sad stuff.
The 3080 has 4352 shader cores. Don't believe the marketing lie that nVidia decided at the last minute to tell.
Each shader can do either one INT32 and one FP32 per batch, or two FP32's per batch.
The 2080 Ti also has 4352 shaders, but can't do two FP32's per batch. Given that the 2080 Ti and 3080 run at about the same clock speeds (ignore the published boost clocks - another lie nVidia tells, as their cards automatically overclock beyond those figures), the performance uplift of the 3080 over the 2080 Ti is a rough proxy for how much of that extra FP32 capacity is useful in games. The best case seems to be about 30% averaged across a number of games at 4K, and less at lower resolutions. So you could say the effective shader count of the 3080 is about 5600.
The 6800 XT has 4608 shaders, and ~10% higher clocks. That shouldn't be enough to make it match the 3080 at the same IPC, so RDNA 2 must have notably higher IPC than Ampere.
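Just to restate that arithmetic explicitly (the ~30% uplift and ~10% clock figures are the rough numbers used above, not precise measurements):

```python
# "Effective" 3080 shader count from its measured uplift over the 2080 Ti,
# compared with a clock-adjusted 6800 XT. The 30% uplift and 10% clock
# advantage are the rough figures from the comment above.

shaders_2080ti = 4352
uplift_3080_vs_2080ti = 0.30                  # best case at 4K
effective_3080 = shaders_2080ti * (1 + uplift_3080_vs_2080ti)

shaders_6800xt = 4608
clock_advantage = 1.10                        # ~10% higher clocks
clock_adjusted_6800xt = shaders_6800xt * clock_advantage

print(f"Effective 3080 shaders:         {effective_3080:.0f}")        # ~5660
print(f"Clock-adjusted 6800 XT shaders: {clock_adjusted_6800xt:.0f}")  # ~5070
# If the 6800 XT still matches the 3080 despite that gap, RDNA 2 has to be
# doing more work per shader per clock.
```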
Alas, the marketing has done its work, and the CA wildfires have prevented Anandtech from properly covering the details. No other publication seems to be particularly interested in correcting the nonsense, and the comment bots are doing their bit to propagate the narrative.
I feel like the RX 6800 XT is the best value proposition here. It undercuts by $50 and pretty much matches the RTX 3080 blow for blow. Like most of you probably do, I feel like the RX 6900 XT is more of a halo product than anything, to assert dominance in TITAN/Threadripper-like fashion. On the other hand, the RX 6800 seems like it is priced a tad too high for my liking for it to make a meaningful impact. Sure, it outperforms the RTX 3070 and RTX 2080 Ti by a claimed 10-20% performance lead and it has a juicy 16GB, but it costs more. In terms of optics, hitting that $499 price point I think would have done more for AMD here to make inroads, since $499 is the stretching point for many people's budgets, especially with the current world situation. If an RTX 3070 Ti comes in at $599, yeah, I will feel the RX 6800 succeeded in the end. As it is, though, I feel like the crown jewel of the lineup is the RX 6800 XT. I am tentatively thinking of upgrading to it from my GTX 1080 Ti after the reviews (or enough leaked benchmarks) pour in.
Without DLSS, in the games where it is enabled for Nvidia, AMD will be totally crushed and therefore is nowhere near Nvidia. DLSS 2 was a big hit for Nvidia compared to the flawed DLSS 1 and is, in my view, THE biggest reason/selling point for the already powerful 3000-series cards.
But isn't DLSS on quality mode actually outputting a better image than the target native resolution in many cases? No point in being ideological about this.
It is not possible for upscaling to produce a better image than native resolution. Looking at DLSS stills compared to native there are obvious differences. The upscaled images aren't as sharp.
No, it objectively is not - and you lose a huge chunk of the FPS benefits on Quality mode.
To be clear, DLSS 2.0 on Quality mode looks *good* - but it's only comparable to native if you smear the native-res image with an inadequate AA technique. Personally I find the differences painfully obvious, to the extent that I'd rather drop some detail levels to raise FPS, but I'd absolutely use DLSS over having to run the game at a lower-than-native resolution if those were the only options.
It's also so, so deeply subjective. Some people really like high levels of noise reduction on digital camera images, for instance - whereas I think it makes things look weird and cartoonish. DLSS (even in Control) looks like a noise reduction algorithm to my eyes.
No, it isn't. It's introducing fake information, so the image often doesn't match what it's supposed to look like, and it introduces temporal artifacts like flickering textures.
In no way is the latest DLSS higher quality than native rendering.
You're also talking about a market where many buyers are trying for the absolute best performance. These aren't budget cards; hell, some of them on their own cost almost as much as entire systems.
Not exactly "totally crushed" - it's a trade-off you can choose to make. It's already been objectively demonstrated that most people can't readily pick a winner between DLSS and AMD's RIS when playing a game in motion, and the latter isn't game dependent.
I favour RIS, because I really don't like the way DLSS resembles heavy noise reduction on a digital camera - but I acknowledge that it has its issues, like crawling and over-sharpening. Neither is perfect, but acting like they're not comparable is a bit silly.
I actually feel the real competition/spoiler for the 6800 is the 6800XT, not the 3070.
The 2070S was $100/25% more expensive than the 5700 XT (locally even more, as Nvidia cards carry a bigger markup) while offering the same amount of VRAM and, at best, a single-digit performance advantage.
Comparatively the 6800 is $80/16% more expensive than the 3070 while offering *twice* the VRAM and a solid, double-digit performance increase.
Assuming the performance numbers and pricing pan out there would have to be quite specific edge cases to argue for the 3070.
More problematic is the 6800XT for only $70/12% more...
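Laying the two comparisons out side by side (MSRPs plus the rough performance deltas quoted above; the 6800 vs 6800 XT question is the same exercise with a $70 gap):

```python
# The two price-premium vs performance-premium comparisons above, side by side
# (MSRPs; performance deltas are the rough figures quoted in the thread).

matchups = [
    # (cheaper card, price, pricier card, price, approx perf advantage)
    ("RX 5700 XT", 399, "RTX 2070S", 499, 0.05),
    ("RTX 3070", 499, "RX 6800", 579, 0.15),
]

for cheap, p_cheap, dear, p_dear, perf_gain in matchups:
    price_premium = p_dear / p_cheap - 1
    print(f"{dear} vs {cheap}: +{price_premium:.0%} price for ~+{perf_gain:.0%} performance")
```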
I'm very much enjoying the steady retreat of the usual suspects from "it will be terrible" to "you can't prove it will be good". On launch I suspect it will be "I already bought a 3090 lol".
They use higher quality textures for these resolutions. It looks like to get really better results you need specific textures for specific resolutions.
There's a lot more than just the frame buffer. 4x more pixels means 4x more pixel shading operations and the associated resources. Higher quality textures are also required to take advantage of the higher resolution.
Bear in mind the game is often not the only thing running on the system. You will also have the desktop compositor running at all times, plus apps like browsers (which are commonly run fullscreen). A double-buffered swapchain at 8K is a minimum of 256MB *each*, so you can easily consume a few gigabytes just by having a handful of apps open.
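The rough math behind that 256MB figure, assuming two RGBA8 back buffers at 8K (real compositors can use more buffers or wider pixel formats):

```python
# Rough math behind "~256MB per app at 8K": two back buffers at 4 bytes per
# pixel (RGBA8). Real compositors may use more buffers or wider formats.

width, height = 7680, 4320
bytes_per_pixel = 4
buffer_count = 2                       # double buffering

per_buffer_mib = width * height * bytes_per_pixel / 2**20   # ~127 MiB
swapchain_mib = per_buffer_mib * buffer_count               # ~253 MiB

apps_open = 8
print(f"Per app:     ~{swapchain_mib:.0f} MiB")
print(f"{apps_open} apps open: ~{swapchain_mib * apps_open / 1024:.1f} GiB in swapchains alone")
```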
It is not, you simply chose to infer something I didn't say - namely that such a scenario wouldn't work at all.
Video memory is virtualized and placed in system memory when there is pressure on VRAM. You just get less performance when it's there. Running on those 512MB GPUs is more than doable, but you'll see noticeable performance deltas. While different under the hood (i.e. the memory is still directly accessible, albeit at much slower bandwidth), conceptually it's quite analogous to the old days when people didn't have a lot of RAM, and your computer slowed down as you needed to constantly page in data from the page file.
Try running an 8K display on a 1GB GPU and let me know how it works out for you after you start actually running more than one app on it at a time.
Games were less demanding then, and actually you couldn't run that many applications simultaneously without experiencing slowdowns.
Just look up "ways to speed up your PC" from that time. One of the main recommendations was to close any unwanted applications.
It's only relatively recently that not worrying about what applications are open when doing something intensive (and applications for all sorts of different things) has become the norm. Hell, I still close many applications before launching a game or video editor out of habit.
That's not what takes up the memory at higher resolutions. Various kinds of processing require a certain amount of memory per pixel to do. When you quadruple or double the number of pixels, memory requirements can go up by quite a bit.
It's not clear why the author thinks the 6800 would be favored over the 6800 XT - it looks like it offers 75% of the performance, for 90% of the price. The 6800 XT is actually the better value in terms of performance-per-dollar, and neither of the cards are cheap.
If you're going to spend over $500 either way, you might as well get the card that is quite a bit faster.
The 6800 is 16% more expensive than the 3070 with up to 18% better performance. In the real world you will probably not find a 3070 at $499 for six months. Assuming you can find one to begin with.
AMD will sell all the 6800s they can produce with good margins too. Actually they will sell all the 6900xts and 6800xts too.
There is no way AMD forecasted with the almost zero availability of Ampere in mind. With their shortage of wafers that are being spread thin by a current shortage of Renoir and the planned uptick in EPYC rollout they don't have spare wafers to just throw at higher production of NAVI2.
There will most likely be long periods of out-of-stock situations for these cards over the next six months, mainly because demand will be well above AMD's projections.
It should now be apparent as to why Nvidia launched Ampere prematurely. Their insiders leaked the fact that AMD was going to match them in performance at the top end. Using less energy too.
After all, wasn't efficiency a Nvidia talking point ever since the 290X was the performance king?
Efficiency has always been an Nvidia talking point when they're good at it, and for the rest of the time it's "we have the fastest card". Never seen so many fanboys swivel their heads so fast as when Nvidia switch position on that metric.
You're probably right about availability, though. It's going to suck for everyone for months.
Too bad, missed an opportunity to really tank NV here with the 830mm^2 reticle limit while NV is stuck at 628 on Samsung. Your cards would be $1600+ and BEATING NV in everything, but you chose to lose. AGAIN. Bummer. You were even on your second run at 7nm here, so not even as risky as going huge on a new process. DUMB. Still chasing "good enough" instead of KINGS. You are 50W less because you are losing to a larger die. I'd be impressed at 50W less and WINNING.
What's a 68000 card? Fix that headline? AMD reveals a 68000? :) For a second I was excited about a massive card...LOL. But can't be bothered to read the 2nd page now.
Setting aside whatever yield issues you'd have at 830mm2, TSMC fab availability doesn't grow on trees. Given holiday-season demand from Ampere, Xbox, PlayStation, AMD, and Huawei's last-time buys, it's unlikely you'd get enough volume to cover the development cost.
It'd be transparently stupid, which is why AMD don't do it. Nvidia only get away with it because they tricked gullible fanboys into bootstrapping their datacentre business, and now they have a market that will pay any amount of money for a supersized die.
Jian, we have a duopoly. A duopoly greatly reduces the pressure of competition.
Monopolies raise prices artificially and duopolies are only an improvement over that — certainly not the same thing as the cure (an adequate minimum level of competition).
People think the Intel—AMD competition is more exciting than it is because both companies have had phases of disastrous management.
"but you chose to lose. AGAIN" Yes, because making a card at 7nm reticle limit is so EASY and OBVIOUSLY GOOD! Why, pumping 400W+ into a GPU is just *TRIVIAL*. Fuck yields, what do they have to do with margins anyway? Screw the actual market, let's make cards that only posers can afford! 🤡
"But can't be bothered to read the 2nd page now" Now you know how everyone feels about your usual comments 🤭
AMD will match the product stack of the competition. It's common sense. Of course quite a few cards will come out using RDNA 2 - again matching the competition's product stack.
Oh boy, that Ryzen money paid off for AMD over the last few years. They made some major investments to showcase these gains, and while it's nice to see how far Ryzen has come, on the GPU side I'm just mind-blown. I was willing to bet money that they would not tie Nvidia anytime soon, but if those slides are true then wow, it is very impressive indeed. It's gonna be a great time to build a gaming PC this holiday season, provided that you find stock lol.
"'more money than sense' category" / "At $650... this is the AMD card that mere mortals can afford."
Why is it people can spend thousands and 10s of thousands on golf, musical instruments, travel, a nice car, etc. and no one bats an eye (“His money, nothing wrong with that”)... but as an IT professional, digital creative and gamer, if I spend several hundred on a video card I'll use for years, I'm obscenely, irresponsibly wealthy?
Yes, top-end PC gear costs more than a pair of blue jeans. But how is even $1,000 something that has you picture enthusiasts laughing maniacally in their top hats and monocles while loaning Bezos money?
I hear that. I know everyone's financial situation is different, but we're still talking about stuff measured in hundreds of US dollars. Even the halo products cost less than a nice smartphone, and those are everywhere.
Most of the people whining still live at home with their parents. The rest of us who are working adults with careers understand there are far more expensive hobbies than desktop computers. $650 ain't nothing compared to a car!
If my kid wants to join the soccer team it's 100 USD/year; swimming classes? 100 USD. Archery? About the same. And so on. The 9800 GT cost below 200 USD to meet ATI's top-tier 5000 series cards. Then shit got crazy, apparently...
That used to get me angry when the kids were little. We were OK, but I am sure there were kids who missed out because the rich guys in charge wanted to ego-trip on doing it the fancy way.
A ball, like-colored t-shirts & sneakers should be fine. I figure $100 went on boots & uniform alone - mandatory items.
I wonder if Smart Access Memory could be made useful outside of gaming. If you have a GPU with 16GB of memory, would you really need more than 8-16GB of system memory? I can see it being limited to workloads that use the GPU, but honestly there are a lot of workloads that are GPU-accelerated at this point, so it might have a larger potential impact than you might think.
While I'm sure there's some value outside of gaming, it's not really suitable for many types of tasks. Reads over the PCI aperture are usually painfully slow and can be on the order of single digit megabytes per second.
Writes are typically more reasonable, but you're still talking about writing over PCIe to the device - still not suitable for general purpose use. In games, this memory is commonly used for things like vertex buffers, or DX12's descriptor heaps, which are not read from at all and are written to infrequently/in small quantities, and where the limited memory bandwidth is less important than the cost of setting up staging resources and instructing the device to make copies, which would have even greater overhead and code complexity.
It also has some value when coupled with the new DirectStorage APIs, since the PCIe IO space is directly accessible, which gives them more efficient access to the frame buffer to do direct copies over PCIe from the NVMe controller. On platforms which don't support this, you'd have to shuffle memory around in VRAM so that the texture uploads can go to the first 256MB. That's also the best-case scenario. In most cases you would have to create a staging surface, have DirectStorage copy there first, and then transfer to a non-visible region of VRAM, which is an extra copy.
In non-gaming graphics applications, most things are not really necessary in real time, so it's commonly more efficient to just take the time and manually upload content to VRAM in whatever way is convenient. The startup costs are usually not that big of a deal unless you're repeating the task dozens or hundreds of times.
@ Ryan Smith: AMD has posted the complete specs for the range on their site now. This includes CU count, stream processors, TMUs, ROPs, ray tracing accelerator cores, Infinity Cache size, memory clock speeds & bandwidth, etc. for all three cards. See here: https://www.amd.com/en/products/specifications/com...
@ bill44 - I noticed on AMD's specs page that it doesn't mention HDMI versions - which is curious. It does say that its DisplayPort is v1.4 with DSC support; but next to HDMI, it just says "Yes" under each card (like, duh). I'm not reading into it too much yet, but it does seem like if the cards had full HDMI 2.1 support that AMD would like to promote that front-and-center... We will see I guess.
That depends on how you define "improved"; do you mean transistor density or some other qualitative improvement. Per the article below TSMC's 7nm+ (called 7FFP in the article) has ~114 million transistors per mm^2 while Samsung's 7nm node has ~95 MTr/mm^2. On the other hand TSMC's 5nm node has ~173 MTr/mm^2 while Samsung's 5nm node has just 126.5 MTr/mm^2. In other words Samsung's 5nm node is barely denser than their 7nm node - it's basically a 7nm+ node rather than a 5nm node.
Of course these are the highest transistor densities of these nodes which are usually employed for *mobile* SoCs and/or when the focus is on power efficiency rather than high performance. The node variants TSMC and Samsung use for high power GPUs and CPUs are *far* less dense. Nevertheless they are still representative of the average density of each new node (when the densest variant of a node gets denser its middle and lowest density variants will also get denser).
Tl;dr TSMC's 5nm node (at its highest density variant) has almost twice the transistor density of TSMC's original 7nm node, representing almost a full node. In contrast Samsung's 5nm node (again, its highest density library) is just ~33% denser than their 7nm node, i.e. it is merely 1/3 of a node. On top of that, per the article below, Samsung's 5nm node has quite a higher wafer cost than TSMC's 5nm node, despite being far less dense! For more : https://semiwiki.com/semiconductor-manufacturers/s...
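The ratios, worked from the density figures quoted above (highest-density, mobile-oriented libraries; the ~91 MTr/mm^2 figure for original N7 is the commonly cited number, and the HPC variants of every node are considerably less dense):

```python
# Density ratios from the figures above (MTr/mm^2, highest-density libraries).
# The ~91 figure for original N7 is the commonly cited number; HPC-oriented
# variants of every node are considerably less dense.

density_mtr_mm2 = {
    "TSMC N7 (original 7nm)": 91,
    "TSMC 7nm+ (7FFP)": 114,
    "TSMC N5 (5nm)": 173,
    "Samsung 7nm": 95,
    "Samsung 5nm": 126.5,
}

tsmc_step = density_mtr_mm2["TSMC N5 (5nm)"] / density_mtr_mm2["TSMC N7 (original 7nm)"]
samsung_step = density_mtr_mm2["Samsung 5nm"] / density_mtr_mm2["Samsung 7nm"]

print(f"TSMC 5nm vs TSMC 7nm:       {tsmc_step:.2f}x")    # ~1.9x, almost a full node
print(f"Samsung 5nm vs Samsung 7nm: {samsung_step:.2f}x")  # ~1.33x
```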
People bought Pascal at high prices. Then they bought the RTX 20 series at even higher prices. The ceiling was smashed and now we're all bathing in glass. Yaayyyy
Re the seeming "decadence" of these price tags for a "gaming toy", it's worth noting that there is a global trend for urban youth to no longer aspire to a car.
Around here, $650 is little more than half the annual fixed taxes on a car alone.
AMD's lower power draw could be a more effective sales influencer than it seems. ~600W seems sufficient for a 6800 XT, but the competition would need an expensive PSU upgrade.
"Like DLSS, don’t expect the resulting faux-detailed images to completely match the quality of a native, full-resolution image. However as DLSS has proven, a good upscaling solution can provide a reasonable middle ground in terms of performance and image quality."
Doesn't DLSS in quality mode actually have better image quality than native while still running faster?
DLSS is an upscaling solution and Ryan was talking about upscaling from lower resolution images to higher resolutions (which is natural because that is what upscaling is). Now, ask yourself whether DLSS that has upscaled from some lower resolution to a higher resolution image can beat the quality of an image rendered in that higher resolution, natively. The answer is obvious and it follows from DLSS being an upscaler. The best high resolution images require the best graphics cards whether DLSS is available or not.
Thanks Ryan! Question: any news or rumors on yields of Big Navi GPUs? Here is why I ask: a few weeks ago, Sony announced or admitted that they wouldn't be able to ship the expected number of PS5 units. Now, the PS5 GPU clocks similarly high to the Big Navi cards. Sony's shipping problems are apparently due to lower-than-expected yields of their APUs; in contrast, MS supposedly doesn't have that problem, but the Series X clocks significantly lower than the PS5. Thus my question: does AMD have enough yield of high-clocking GPUs?
This is what happens when actual engineers run a tech company. Hard work makes competent professionals, which grows the skills they have, and soon enough, are able to perfect their skills and do remarkable things. I'm waiting on the reviews like everyone else, but I have confidence that Dr Lisa Su and her team can pull this off. They did it with the CPU wars, why not with the GPUs as well!!
I think it's fair for AMD to turn on Rage Mode for an apples-to-apples comparison. I have an RTX 3090 and GPU Boost is on by default. I had no idea how aggressive it is until, just out of the box, it CTD'd all my gaming sessions. It's only unfair if GPU Boost is off by default from Nvidia.
I tend to agree, but the two aren't really comparable. Rage Mode is apparently nothing more than a one-button power limit increase, meaning it's not actually overclocking at all - it just lets the boost clock go higher for longer than it otherwise would. nVidia's GPU boost, however, is a true overclock, as it routinely goes well beyond the published boost clock speed, making the latter value meaningless.
What they actually did for their 6900 XT vs 3090 comparison was give the former more power to play with, and provide the benefits of Smart Access Memory. The latter seems to be able to provide more of an uplift than Rage Mode, though it varies considerably from one game to the next.
I would have preferred that they show out-of-the-box values for their comparisons. Using Rage Mode increases power usage (obviating the power advantage they're pushing), and SAM requires a brand new CPU that not everyone can reasonably even use once it's released (AM4, like Intel's consumer platform, is for toy computers, and doesn't have enough I/O for a real setup).
If purchasing a RTX 3090 is a reality for anyone who would consider the 6900XT, they could literally build a X570/5600X system with that $500 savings and get to use SAM. That is assuming they are transitioning from another DDR4 based system. These products are targeting gamers as a priority, for which X570 offer plenty of IO for most. Few people need more than 3 PCIe cards of any type, and/or more than 2 NVME drives. After a GPU or two, most other things use way less bandwidth, and USB 3 is still an option. Even if you need something more extravagant there are options with built-in 10Gbe and thunderbolt, allowing you to still keep slots available.
The RX 6800 XT will probably have a board power closer to 280W, and 270W during gaming, compared to the nominal 300W.
Frame buffer of 128MB written 60 times per second and then read out for display = 15GB/s. If the Infinity Cache were only used for the frame buffer, it would be worth about 15GB/s of raw bandwidth, or equivalent to adding 30GB/s to the GPU RAM. If we are instead talking about a general last-level cache whose use is determined by the game/application, then it could be worth much more. Memory compression can then be used to improve the usable bandwidth further. Ray tracing and special effects that re-read and modify the frame buffer may make more use of it. But there is a limit to its usefulness. Microsoft is using several GB with extra bandwidth in their next console design, so 0.125GB seems very small.
The RX 6800 XT seems to be the sweet spot of power & performance, with the non-XT not far behind. Availability will be the key. The Nvidia RTX 3080 & 3090 have a 6+ week delay from launch (after initial stock ran out), and the waiting list grows 2 weeks for every new week of orders until demand falls and/or supply increases.
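For reference, the arithmetic behind that 15GB/s figure (frame-buffer traffic only; as a general last-level cache the real bandwidth savings depend entirely on the workload):

```python
# The ~15GB/s figure: a 128MB frame buffer written and then read back out
# 60 times per second. Frame-buffer traffic only; as a general last-level
# cache the real bandwidth savings depend on the workload.

framebuffer_mb = 128
frames_per_second = 60

write_gbs = framebuffer_mb * frames_per_second / 1024   # ~7.5 GB/s
read_gbs = write_gbs                                    # read back for output
total_gbs = write_gbs + read_gbs

print(f"Frame-buffer traffic alone: ~{total_gbs:.0f} GB/s")
```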
Brilliant work, AMD. On a side note, I wonder whether RDNA2 will ever find its way into the last of the AM4 APUs. Zen 3 + RDNA2 will be pretty nice for people coming from Raven Ridge or Picasso.
Not for AM4 - the first RDNA 2 APU will be Van Gogh. The desktop APU with RDNA 2 won't be due until AM5 arrives, and it'll benefit from the additional memory bandwidth.
Was hoping, against hope, these GPUs would make it onto AM4. Oh, well, as long as there's a Zen 3 APU, carrying either Vega or RDNA, I'll be quite happy. I'm on a 2200G, but it runs well, and I've got no complaints. Just wish encoding were a bit faster at times. And as for the Vega 8, happy with it too, because I don't really play games any more, except for some older titles I missed and favourite classics. I am looking forward to the mythical HL3 though :)
Cezanne's what you'd be looking for! Zen 3 + Vega. Hopefully this time it'll get a release on desktop. No guarantees that it would work in an existing motherboard, though.
Thanks, I'll keep an eye on Cezanne and perhaps get a 6C/8T one in a few years' time. Looking forward to that Zen-Zen 3 IPC boost and higher clocks. Motherboard is B450 Tomahawk. Hopefully it'll get support.
"Rage Mode" isn't really an "auto-overclocking" feature at all. AMD explained to a bunch of major tech press (like GamersNexus) that is simply slightly bumps up the power limit & fan curve, but does NOT touch the actual clock-speed setting at all. This is good according to AMD for a +1-2% performance bump on average. So basically, the RX 6900 XT, really is just THAT fast. Even without Rage Mode, it'll still be neck & neck.
How can the 6900 XT be roughly as fast as (or slightly faster than) the RTX 3090 in games when the latter card has an astounding 15 TFLOPS higher FP32 performance? Are the Infinity Cache, Rage Mode and Smart Access Memory really enough to compensate for the much lower (nominally, anyway) FP32 performance?
Or do these games also support FP16 precision (at least partly, for shading some fast action scenes where higher FPS is a higher priority than quality), where I assume Navi 2 still has twice the performance while consumer Ampere offers ZERO extra performance (since Jensen Huang would hate to lose even a single AI training or inferencing customer for their over-priced professional cards...)?
That FP32 performance quoted for the RTX 3090 is a theoretical peak that can *never, ever* be reached in actual gaming scenarios, because it's only attainable when there are no INT32 operations occurring in the shaders.
In other words: RDNA 2 doesn't have to match Ampere's quoted FP32 to match its performance in games; it only needs to generate around 2/3 of the TFLOPS to reach a similar level.
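As a rough sanity check of that "about 2/3" figure, using the published boost-clock specs (theoretical peaks that neither card sustains in games):

```python
# Sanity check of the "about 2/3" figure using published boost clocks.
# Theoretical peaks only; neither card sustains these numbers in games.

def tflops(shader_count, boost_ghz):
    return shader_count * 2 * boost_ghz / 1000

rtx_3090_quoted = tflops(10496, 1.695)   # ~35.6 TFLOPS, counting dual-issue FP32
rx_6900xt = tflops(5120, 2.250)          # ~23.0 TFLOPS

print(f"RTX 3090 quoted FP32: {rtx_3090_quoted:.1f} TFLOPS")
print(f"RX 6900 XT FP32:      {rx_6900xt:.1f} TFLOPS")
print(f"Ratio:                {rx_6900xt / rtx_3090_quoted:.2f}")   # ~0.65, i.e. ~2/3
```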
Are INT32 operations that common in shaders? I thought Nvidia added INT32 shaders only recently (in Turing, I think). If they are that common, why did Nvidia remove them from Ampere?
The 6800 has 16GB of RAM, not 14GB.
Nowhere in the article does it say the 6800 has 14 GB of RAM. It does say that the RAM is 14 Gbps, though, which is correct.
16 Gbps.
@RyanAMD could kill 3090 with more TDP of 6900 XT.
What is the real explanation of artificially limiting the full die of 80CUs at low clock speed in order to maintain 300W TDP while you compete with a 350W TDP card like 3090 ?
I mean the excuse of performance per watt increase from 54% of 6800 XT to 65% for 6900 XT compared to RDNA first gen is simply ridiculous.
We are talking here about a flagship card - the 6900 XT - competing with a 350W TDP monster like 3090.
AMD should have gone at least to 355W as some leaks have said before, in order to win clearly the 3090 by a decent margin.
Spunjji - Thursday, October 29, 2020 - link
It's possible the performance just doesn't scale much further with the extra wattage. There must be a reason, as they'd have plenty of time to make the tweak if it were that simple. Maybe the board design just can't support that power?NikosD - Thursday, October 29, 2020 - link
A slightly increase of 72CUs (6800XT) to 80CUs (6900 XT) gave a new card competitive to 3090 using the exact same clocks/TDP (for 6800 XT/6900 XT)So, the architecture scales extremely well.
Also, judging by the rumors and PS5, boost clock could be at least 10% higher.
If they don't want to support a beefier cooler for a reference card, they should allow AIBs to do it.
It is very strange that AMD doesn't allow custom 6900 XT from the beginning.
Could they afraid of a nVidia respond of a 3090 Ti / Titan and they haven't prepared a RDNA2/ HBM2 card or something equivalent ?
The same clock/TDP for 6800 XT and 6900 XT cards, simply doesn't fit.
Spunjji - Thursday, October 29, 2020 - link
10% increase in resources to get a 10% benefit is one thing, but I'm talking about how well the performance scales with additional clock speed vs. power usage. It's possible that throwing another 50W of power at the board will only get them a few percent extra in performance.Designing a card to soak up 300W of power isn't simple, either. Look at the issues Nvidia have had with theirs.
I'd wager they're keeping the 6900XT to themselves because they can't bin enough of the chips to make it worth sending out to AIBs.
This is just me trying to make sense of it, of course - I'm guessing. But it seems more sensible to assume a reason than to assume they'd just... skip the opportunity.
d0x360 - Thursday, October 29, 2020 - link
Well we saw those leaks with a card hitting 2.8ghz.. it's possible they are just holding stuff back for the inevitable 3080ti or supers from nVidia. They could also change to gddr6x, increase the bus size, add memory, add more cache and that alone would probably do the trick but if they also have headroom on the clocks with a TDP increase... It could be huge. AMD with the performance crown for the first time since the 290x.NikosD - Thursday, October 29, 2020 - link
Yes, Ampere looked like it has a frequency wall at 2.0GHz mitigated by a recent driver but I've never expected RDNA2 to have a power wall of 300W as you describe it.All the latest leaks and rumors were talking about custom 3080 cards with more than >350W TDP.
Yes, they could be wrong as in other things regarding RX 6000 series and your guess could be right, but I doubt it.
I mean 300W could be a sweet spot for RDNA2 efficiency, but AMD should/could sacrifice a little or more of its efficiency for a little more performance.
Custom 3080 cards and more tech info about increased TDP behavior of RDNA2 cards will enlighten us in the near future.
Spunjji - Friday, October 30, 2020 - link
I'm not talking about a power *wall*, just a point of rapidly diminishing returns. We've already seen that with Ampere and the 3090 vs the 3080 - the 3090 should be a LOT faster going off specs, but it just isn't. Think of the same way Ryzen CPUs overclock - or, to be precise, don't. They don't draw anywhere near as much power as Intel's best, so in theory there's "headroom", but in reality the design is actually at its limits.AMD have spent the past 8+ years sacrificing efficiency for performance, and when they do so and only get marginal leadership it just gets criticism poured on them (as with the 290X). I don't think pushing for a few percent extra performance at the cost of 50-100W more power makes sense.
As for those leaks about them hitting 2.8Ghz - even if they're true (bearing in mind that there's a cottage industry built around overhyping AMD products and then claiming to be "disappointed"), we have no idea under what circumstances they hit clocks higher than 2.2Ghz and what the resulting performance is.
I'm going with Occam's razor on this: if the architecture could reliably and consistently hit a significantly higher level of performance for about 50W more power, and that would allow them to beat the 3090 in both performance and performance-per-watt, then they'd have done it. But they haven't. Why would they top out with a bleeding-edge bin that *just* misses the target if it were trivial for them not to?
NikosD - Saturday, October 31, 2020 - link
I certainly don't want to be misunderstood, I'm very pleased by RX 6000 series announcement.But the similarity of 6800 XT and 6900 XT cards n all but number of CUs (80 vs 72) it's very strange to me.
I have never mentioned anything exaggerated regarding clock frequency like 2.8GHz. I only said 10% more than the existent.
I don't want to repeat myself but 300W for both 6800XT and 6900 XT simply doesn't fit
Hopefully my comments were heard by AMD (!) and we will see what AIBs can say regarding 6900 XT
And I really hope for AMD to allow them to do whatever they think they have to do in order to be even more competitive with 3090, meaning to increase clocks and TDP:
https://videocardz.com/newz/amd-in-talks-with-aibs...
Budburnicus - Tuesday, November 3, 2020 - link
Power has a HARD LIMIT on scaling!Would an RX 6900XT get better FPS when running at 350+ watts? Yes - but it likely would not exceed about a ~4% improvement - and that is being generous.
Also, just because AMD has not SOLD a 350 watt card in RDNA 2 - you do realize that power sliders exist. For example: I currently (til RDNA 2 and Ampere are in stock) - use a G1 Gaming GTX 980 Ti, a 250 watt GPU, but because I can add +39% to the power slider, I can FEED my GPU 348 watts! As a result, instead of a variable boost around ~1.3 GHZ, my 980 Ti is 100% frequency stable a 1501 MHZ core, and 8 Gbps VRAM (a 14% OC on VRAM too)
There are 980 Tis with 3 pins, and on air, even those max out around ~1580 MHZ, and there is less than 1% difference FPS from my 1501. In fact, the diminishing return REALLY start around 1400 MHZ for GeForece 900 (Maxwell) series - you can throw 500 watts at the card with LN2 cooling nearing 1700 MHZ core, and it BARELY touches frame rates.
Look at RDNA 1, and Ampere - you can add available power ALL DAY - see the ASRock 5700XT Taichi - and while you MIGHT even surpass 10% extra frequency, but performance? Closer to about 2% - especially with Ampere - but Turning was similar as well. The 5700XT GPUs all boost as fast as they can, and notice how there is BARELY EVER more than 2% speed between the best and worst (referencc) models!
The ONLY thing that might make frequency scale better with EXCESSIVE power in RDNA 2 is the Infinity Cache - if overclocking increases the bandwidth and decreases the active latency to said Infinity Cache. But even then, every uArch has a limit, and a point of diminishing returns.
AMD COULD have sold the 6900XT at 350+ watts, but that would, at best, probably net it another ~5% FPS being generous.
Believe me when I say, IF AMD COULD have sold a 6950XT with insane clocks and power budget that actually STOMPED the RTX 3090 - they DEFINITELY WOULD HAVE!
(They still might, Water cooled edition or something) - but it won't be worth the price, hell the 6900XT isn't worth it with the 6800XT being ~$350 cheaper, and seemingly 98% as fast or more. And after ACTUAL overclocking, I bet 6900XT is all but tied.
AMD is finally being seen as a "Premium Product" - for the last 15 years AMD were B tier at best, but Zen2 has shown AMD can provide class-leading performance and efficiency.
Notice the LACK of AMD GPUs in Laptops? Notice how the few laptops with AMD GPUs could NEVER keep up with Nvidia at the same power? AMD has NEVER been a big player in Mobile GPUs - but with THIS EFFICIENCY AND DESIGN - AMD are poised to STOMP Ampere GPUs in Laptops! Which is a GIGANTIC and growing market - my bro has a RTX 2070 laptop, he gave me his old GTX 1060 model! Those are the only PCs he buys.
And now AMD has both CPUs and GPUs that can WIN MOBILE GAMING! That is the master plan, after capturing ~50% or better of the Desktop Discrete GPU market anyway!
mirasolovklose - Sunday, November 1, 2020 - link
Yes, I would imagine the choice of clock speed would also be based on the number of units they can get with 80 CUs at the chosen clock. So it's quite possible this was a purely yield-based decision.
d0x360 - Thursday, October 29, 2020 - link
Shhhh the ps5 doesn't use RDNA2... They confirmed it today. It's more like RDNA 1.5 and closer to 1. The xbox is the only RDNA2 console.
Qasar - Friday, October 30, 2020 - link
d0x360, sorry but you are wrong. the PS5 does use rdna 2:
https://en.wikipedia.org/wiki/PlayStation_5#Hardwa...
" The PlayStation 5 is powered by a custom 7nm AMD Zen 2 CPU with eight cores running at a variable frequency capped at 3.5 GHz.[9] The GPU is also a custom unit based on AMD's RDNA 2 graphics architecture. It has 36 compute units running at a variable frequency capped at 2.23 GHz and is capable of 10.28 teraflops "
https://www.tomshardware.com/news/ps5-rumor-specs-...
https://www.techradar.com/news/ps5#section-ps5-spe...
where did you see the ps5 doesnt use rdna2 ?
Spunjji - Friday, October 30, 2020 - link
Yeah, really not sure what d0x360 is on with this RDNA 1.5 crap. I just did a Google and it looks to be people misunderstanding Microsoft's marketing materials about "RDNA 2 integration" (e.g. software and hardware) and then extrapolating from there - in this case, to an extent that can only be described as absurd.
haukionkannel - Friday, October 30, 2020 - link
Full 80 CU chips are very rare! So it is no use to give AIBs something they don't have enough of to sell... for example Asus would need several 6900 XT chips... and because AMD does not have that many, there's no need to sell them to Asus.
d0x360 - Thursday, October 29, 2020 - link
Ahem... AMD could always release a 6950 and 6950 XT. Think GDDR6X instead of just 6, bigger bus, maybe more Infinity Cache, higher TDP so we can see those magical leaks that showed up to 2.8GHz, more cores, 4 more gigs of memory... and priced at $1300.
Now they would be winning by a large margin and have a lower price. They could do the same across the entire line. 6850 XT, more RAM, GDDR6X, more cache... that alone could allow it to spank the 3080 Ti (20 gig variant)
Spunjji - Friday, October 30, 2020 - link
GDDR6X = more power use, more cost
Bigger bus = non-trivial adaptation, more power and cost
More infinity cache = new die, WAY more cost
4 GB more memory = more cost, power, and a weird bus arrangement
Maybe that would get them enough performance for leadership, maybe not, but it would spoil their obvious focus on producing performance-competitive cards that can still generate high margins. They were unable to do that with Fury and Vega because of the costs of the HBM vs the performance achieved - switching to a larger bus or a new die would have a similar effect, not even accounting for the extra costs of qualification. GDDR6X is a *maybe* for a refresh part, I guess, but that would ruin their power efficiency.
SaturnusDK - Saturday, October 31, 2020 - link
The whole point of Infinity Cache is to remove the bandwidth constraints of GDDR6 to achieve an effective bandwidth that is higher than the one the 3090 gets with its 384-bit bus and GDDR6X. In fact quite a bit higher. There would be literally no point in using GDDR6X for AMD as the effective performance increase would be minuscule.
I do think a 6950XT is possible if AMD feels the need for it but it would be a limited issue version using very rare binned chips to get slightly higher clock speed, and would almost certainly double the VRAM to 32GB.
fingerbob69 - Friday, October 30, 2020 - link
$999 v $1499
Any more would amount to cruel and unusual punishment
Luminar - Wednesday, October 28, 2020 - link
Is that enough for mining BC?
tromp - Wednesday, October 28, 2020 - link
More interesting for mining Grin...
Budburnicus - Tuesday, November 3, 2020 - link
Uhhh, GRIN is all but dead...
FreckledTrout - Wednesday, October 28, 2020 - link
When will we see reviews? On paper, AMD looks to have knocked it out of the park. They are getting 2x the bandwidth of a 384-bit bus via a 256-bit bus + 128MB cache. The 6900 XT is trading blows with the 3090. I'm actually impressed.
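For anyone curious how that "2x the bandwidth" claim might be arrived at, here is a minimal sketch of the usual cache-amplification arithmetic. The bus widths and 16 Gbps GDDR6 speed match the announced specs; the Infinity Cache hit rate and internal cache bandwidth below are assumed placeholders, not AMD-confirmed figures.

```python
# Raw DRAM bandwidth: bus width (bits) / 8 * data rate (Gbps) = GB/s
def raw_bw(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

bw_256_g6 = raw_bw(256, 16)   # Navi 21: 256-bit GDDR6 @ 16 Gbps -> 512 GB/s
bw_384_g6 = raw_bw(384, 16)   # hypothetical 384-bit GDDR6       -> 768 GB/s

# Simple weighted model: requests that hit the on-die cache are served at the
# cache's bandwidth, the rest go to GDDR6. Hit rate and cache bandwidth are
# assumptions for illustration only.
hit_rate = 0.58               # assumed 4K hit rate
cache_bw = 1990.0             # assumed Infinity Cache bandwidth, GB/s
effective = hit_rate * cache_bw + (1 - hit_rate) * bw_256_g6

print(f"256-bit GDDR6 raw:     {bw_256_g6:.0f} GB/s")
print(f"384-bit GDDR6 raw:     {bw_384_g6:.0f} GB/s")
print(f"Modelled effective BW: {effective:.0f} GB/s "
      f"({effective / bw_384_g6:.2f}x a 384-bit GDDR6 bus)")
```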
faizoff - Wednesday, October 28, 2020 - link
Reviews are always closer to release date. Probably be a few days before the 18th.
Luminar - Wednesday, October 28, 2020 - link
That's assuming the bots don't dry up the supply of cards for their mining operations.
DigitalFreak - Wednesday, October 28, 2020 - link
I guess we'll see if vendors follow AMD's guidance. I doubt it, as vendors consider a sale a sale, no matter who it comes from.
haukionkannel - Friday, October 30, 2020 - link
Yep, cheaper not to change web shops... They still sell everything - to bots, but anyway. The AMD suggestion means that they should make their web pages better and that costs some money... Why would they do it?
romrunning - Wednesday, October 28, 2020 - link
Ryzen delivered the goods - it feels like the 6800 series will do the same!
Kjella - Wednesday, October 28, 2020 - link
That was my key takeaway too, it's not mindblowing performance or anything. But like the first gen Ryzen, AMD is back to battle for the high-end market. And if they can actually deliver volume for the holidays you might find many of these under a Christmas tree if Nvidia doesn't work out their availability issues. You don't get very far with IOUs for a present.
Kangal - Thursday, October 29, 2020 - link
...agreed, I can see the parallels!
Pre-Competitive Era:
Bull: AMD FX-9590 --vs-- Intel Core i7-4790k
GCN: AMD RX 480 --vs-- Nvidia GTX 1070
During Competitive Era:
Ryzen: AMD r5-1400 --vs-- Intel Core i7-6700k
RDNA: AMD RX 5700 --vs-- Nvidia RTX 2070s
After Competitive Era:
Ryzen-2: AMD r7-3700x --vs-- Intel Core i9-9900k
RDNA-2: AMD RX 6800xt --vs-- Nvidia RTX 3080
raywin - Wednesday, October 28, 2020 - link
mythical reviews of cards you cannot buy, nvidia style, or legit supply day 1?
mrvco - Wednesday, October 28, 2020 - link
That is the big question that remains to be answered, but considering their confidence and execution as of late, I would be very surprised if they were not able to deliver.
raywin - Wednesday, October 28, 2020 - link
the only reason we are talking about any of this, is their "execution as of late"
RealBeast - Wednesday, October 28, 2020 - link
Impressive, *if* AMD can finally get good and frequently updated GPU drivers as a priority.
supdawgwtfd - Thursday, October 29, 2020 - link
What rock do you live under..
d0x360 - Thursday, October 29, 2020 - link
I'd imagine right before launch so... Soon.
megadirk - Wednesday, October 28, 2020 - link
I think the winner here will be whoever has them in stock at this point.
ludicrousByte - Wednesday, October 28, 2020 - link
This ^^^^
raywin - Wednesday, October 28, 2020 - link
you win anandtech for the day
Rezurecta - Wednesday, October 28, 2020 - link
Bought AMD @ $7
FwFred - Wednesday, October 28, 2020 - link
I bought AMD at $2.50 :) but I sold. Anyways, I was able to find a 3080 in stock today.
JfromImaginstuff - Wednesday, October 28, 2020 - link
Unexpected
5j3rul3 - Wednesday, October 28, 2020 - link
Rip Intel
ludicrousByte - Wednesday, October 28, 2020 - link
Huh? Don't you mean NVidia? And hardly RIP, their cards are selling like hotcakes.
1_rick - Wednesday, October 28, 2020 - link
Technically untrue, since there aren't any to buy. (Yeah, I know that'll change eventually.)
raywin - Wednesday, October 28, 2020 - link
no, technically true, they sold the 1 FE they didn't hold back for suppliers, and all of the aib day 1 cards, approx 15, like hotcakes at ihop
raywin - Wednesday, October 28, 2020 - link
yes, exactly 3 sold like hotcakes
Senpuu - Wednesday, October 28, 2020 - link
I took the comment at face value. With Intel getting back into the discrete GPU game, the fact that AMD is still launching cards that can swing for the title again makes the market that much less friendly to a newcomer.
ror1 - Wednesday, October 28, 2020 - link
I'm running a 2070super + 3600 with a 550w power supply and it's been smooth. If I get the 6800xt, will I be able to run everything fine?
1_rick - Wednesday, October 28, 2020 - link
Technically you're supposed to use a 650W minimum on the 2070S (at least, according to nVidia's specs), but a quick search shows it tends to use about 215W at max load. With the 6800XT being a 300W card, practically speaking you probably aren't overtaxing your PSU, since the 3600 is a 65W part, but you might be pushing it. Going to a 650W is probably safer.
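A rough system-power tally for that 550W question (a back-of-the-envelope sketch only; the GPU number is AMD's 300W board power, the CPU number is the 3600's ~88W package power tracking limit mentioned a couple of replies down, and the platform overhead is an assumed allowance):

```python
# Back-of-the-envelope PSU loading estimate for a 6800 XT + Ryzen 5 3600 build.
gpu_w      = 300   # RX 6800 XT total board power (AMD's figure)
cpu_w      = 88    # Ryzen 5 3600 PPT (worst-case package power)
platform_w = 75    # assumed: motherboard, RAM, SSDs, fans, peripherals

total = gpu_w + cpu_w + platform_w
for psu in (550, 650):
    print(f"{psu} W PSU -> ~{total} W load ({total / psu:.0%} of rating)")
```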
vladx - Wednesday, October 28, 2020 - link
Those power numbers are intentionally inflated to account for lower quality PSUs.
snowmyr - Wednesday, October 28, 2020 - link
Sure. And ror1 could have a lower quality psu.
silverblue - Wednesday, October 28, 2020 - link
The 3600 has a package power tracking of 88W, just a little bit of extra wattage to factor in.
Spunjji - Thursday, October 29, 2020 - link
A high-quality 550W PSU, or a no-name one? If it's the former, you're good. If the latter, you may struggle.
xrror - Wednesday, October 28, 2020 - link
All I really want to know, is if Cyberpunk 2077 will support the ray-tracing method used by these cards.
bldr - Wednesday, October 28, 2020 - link
this
xenol - Wednesday, October 28, 2020 - link
Cyberpunk 2077 is using DXR. If RDNA2 doesn't support DXR (but it should, considering it has DX12 Ultimate support), then that's a "wtf" on AMD's end.
nandnandnand - Wednesday, October 28, 2020 - link
DirectX raytracing was mentioned on one of the slides, so yes, Cyberpunk 2077 will support these cards.
Pentium Peasant - Wednesday, October 28, 2020 - link
Ampere is the obvious choice if you care about RT and DXR.
29a - Wednesday, October 28, 2020 - link
I agree PP, AMD would be bragging about ray tracing if their implementation was worth bragging about.
dr.denton - Sunday, November 1, 2020 - link
We'll see. So far it looks like they're working on drivers and games actually supporting RT via DXR. But even if pure RT performance can't quite match NV's, what matters is how the GPUs perform in actual games where most of the work will still be rasterizing.
raywin - Wednesday, October 28, 2020 - link
after its release in 2077 it will support holographic ray tracing
inighthawki - Wednesday, October 28, 2020 - link
They didn't show numbers in the presentation, but based on the leaked Port Royal numbers, Nvidia is the way to go if you care about ray tracing.
However, to answer your question, these cards do support DXR so it should work fine.
Machinus - Wednesday, October 28, 2020 - link
Any guesses about 6700 specifications?
simbolmina - Wednesday, October 28, 2020 - link
40 CU and 2080S performance.
Mr Perfect - Wednesday, October 28, 2020 - link
That's squarely in rumor territory, but the rumor sites have 6700 XT at 40 CUs (2560 Stream Processors) with 12GB of GDDR6 on a 192-bit bus. If they cut the CUs in half compared to 6900 XT, then maybe like half the performance of the 6900 XT?
Machinus - Wednesday, October 28, 2020 - link
50% of a 6900XT for 40% of the price?
Spunjji - Thursday, October 29, 2020 - link
More likely to be 60/40 - the 6700XT is liable to have higher clocks and be getting more out of its available resources than the 6900XT can, which is more than likely power limited. It may depend a lot on how they implement the Infinity Cache on lower models, though.
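To put rough numbers on that "60/40" guess (everything here is rumour plus assumption: the 40 CU / 192-bit / 12GB figures are from the rumour sites mentioned above, the 2.25 GHz figure is the announced 6900 XT boost clock, and the higher small-die clock is a placeholder):

```python
# Crude throughput ratio between a rumoured 40 CU part and the 80 CU 6900 XT.
# 64 shaders per CU, 2 FLOPs per shader per clock; small-die clock is assumed.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

full_80cu             = tflops(80, 2.25)   # 6900 XT at its announced boost clock
small_40cu_same_clock = tflops(40, 2.25)
small_40cu_higher     = tflops(40, 2.60)   # assumed higher clock for a small die

print(f"Same clock:   {small_40cu_same_clock / full_80cu:.0%} of the 6900 XT's raw throughput")
print(f"Higher clock: {small_40cu_higher / full_80cu:.0%} of the 6900 XT's raw throughput")
# Raw rumoured memory bandwidth: 192-bit GDDR6 @ 16 Gbps
print(f"192-bit bus:  {192 / 8 * 16:.0f} GB/s before any Infinity Cache effect")
```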
fatweeb - Wednesday, October 28, 2020 - link
Looks good, but still no DLSS counterpart and not a peep about raytracing performance.
nevcairiel - Wednesday, October 28, 2020 - link
They did mention some super-scaling thing in as many words, but that's about it. Supposedly "coming later".
EliteRetard - Wednesday, October 28, 2020 - link
Am I the only one thinking the 6800 should be renamed to the 6700XT?
It's in a much lower performance class, with larger differences than the others.
hansip87 - Wednesday, October 28, 2020 - link
I think AMD strategy is a bit different than Nvidia, after all they all got 16GB. 6700 i can imagine only ends up with 8GB.Spunjji - Thursday, October 29, 2020 - link
12GB is rumoured for the 6700XT, not sure about vanilla 6700 though.Spunjji - Thursday, October 29, 2020 - link
It uses the same chip and will probably have a ton of overclocking headroom compared with the high-end chips. We'll see on release, but based on AMD's past cut-down cards, it's more than likely worthy of the name.Agent Smith - Wednesday, October 28, 2020 - link
To infinity-cache and beyond...Pytheus - Wednesday, October 28, 2020 - link
So excited. If I can't pick up a 3070 this week, then at least I know I have a viable alternative when the 6800 is released in a few weeks. Its now a race to see who can provide the most inventory.Veedrac - Wednesday, October 28, 2020 - link
> The inclusion of sampler feedback also means that AMD will be able to support Microsoft's forthcoming DirectStorage API. Derived from tech going into the next-gen consoles, DirectStorage will allow game assets to be streamed directly from storage to GPUs, with the GPUs decompressing assets on their own. This bypasses the CPU, which under the current paradigm has to do the decompression and then send those decompressed assets to the GPU.
This is misleading. You can do direct storage without sampler feedback. It's just harder to make use of the capability. I'm unclear whether DirectStorage proper requires it, but if it does, the causality is the other way around—that is, everyone was implementing it so it saw no point making it optional.
Ryan Smith - Wednesday, October 28, 2020 - link
This is a good point. Thanks!
WaltC - Wednesday, October 28, 2020 - link
Really liked what AMD did last year with the simultaneous release of NAVI and Zen 2--available on the same date. Sort of wish they had done that this year, too. Ah, well, back to "hurry up and wait"...
I also worry about the "starting at" pricing for the GPUs. A $579 2080Ti-killer is great--while a $700-$800 2080Ti killer is not so hot, imo. But if I can get the 70 CPU GPU (6800XT) for close to the "starting at" price AMD states, that is probably where I'm going!
WaltC - Wednesday, October 28, 2020 - link
Above should be "70 CU" GPU...
Arbie - Wednesday, October 28, 2020 - link
AMD was severely criticized by some of the press for that 2019 simultaneous release.
boozed - Wednesday, October 28, 2020 - link
Damned if you do...
Or more to the point, some outlets simply need to create outrage from any old thing.
smilingcrow - Wednesday, October 28, 2020 - link
Maybe they wanted Zen 3 to shine and not let RDNA2 tarnish it slightly by association?
Not that RDNA2 isn't impressive, but it's just that Zen 3 is two leagues ahead based on AMD's presentations.
They had to sex up RDNA2 by including overclocked data, which shows their insecurity.
When you consider the synergy between the two it does seem odd to separate them.
jakky567 - Thursday, October 29, 2020 - link
Honestly, RDNA 1 was like Zen 1 to me: finally something somewhat competitive at a reasonable value. RDNA 2 is like Zen 2, trading blows for different tasks, but it can't claim an absolute crown. Hopefully RDNA 3 will outright be faster.
dwade123 - Wednesday, October 28, 2020 - link
AMD is no longer cheap. When most of their fans are dirt poor... RIP AMD FANS!
1_rick - Wednesday, October 28, 2020 - link
"most of their fans are dirt poor"
Where did you get this ridiculous idea? AMD not offering any expensive consumer cards doesn't say anything about whether people can afford nonexistent expensive ones.
TheinsanegamerN - Thursday, October 29, 2020 - link
Probably from every AMD related comment section whining about prices day in and day out when performance being offered keeps going up. Seems all they care about is price.
Spunjji - Thursday, October 29, 2020 - link
But the whining's not been from "AMD fans", as best I can tell - most of it's goalpost-moving by people who have "suddenly decided" to buy Intel instead.
Qasar - Thursday, October 29, 2020 - link
where were those people when intel/nvidia were raising their prices ?
if intel/nvidia do it, its fine, if amd does it, its wrong, and a federal offence.
Spunjji - Friday, October 30, 2020 - link
Pretty much. I can only speak for myself, but I've been whining about Nvidia's prices since Pascal released - and I had a pretty good whine when AMD slotted right into Nvidia's existing price structure with Navi 10, too 😅
nandnandnand - Wednesday, October 28, 2020 - link
Wrong website.
Hifihedgehog - Wednesday, October 28, 2020 - link
They will be releasing lower-tier RDNA2 products eventually, too. They are tackling the top end of the stack first, though, as we can see from the execution of their strategy. All good things in time, my friend.
DigitalFreak - Wednesday, October 28, 2020 - link
Take that crap to the WCCFTech comment section where it belongs.
raywin - Thursday, October 29, 2020 - link
It would be cool if we could down vote posts, particularly the fan boy ones. I agree, that guy should be deported to WCCFTech
sonny73n - Sunday, November 1, 2020 - link
His comment belongs anywhere and he's right. If AMD fans could afford better performance cards, they would never become AMD fans.
Spunjji - Monday, November 2, 2020 - link
"Derp derp derp, I am also a fanboy"Okay Sonny, riddle me this: How do you reconcile these claims with the hard fact that Nvidia's most popular GPU by a long way, the GTX 1060, directly competed with and cost the same as (or less than, if it's the 3GB variant) AMD's most popular GPU, the RX 580? How would it be that the substantially slower and significantly cheaper GTX 1050 and 1050Ti cards also rank higher than the most popular AMD option?
It sort of looks like most buyers are cost-sensitive, and that has nothing to do with which "team" you support. It seems a bit like Nvidia's most popular products are some of their cheapest. It's *almost* like you're full of shit 😅
Qasar - Monday, November 2, 2020 - link
Spunjji, um " almost " ???Makaveli - Wednesday, October 28, 2020 - link
lol speak for yourself.ahenriquedsj - Wednesday, October 28, 2020 - link
I will buy the 6800 (Performance / RAM)
Spunjji - Thursday, October 29, 2020 - link
If it overclocks like it looks like it should, then it'll be the value king of the bunch. Depends a lot on whether they've put a hard power limit in.
tosseracct - Wednesday, October 28, 2020 - link
Yay AMD and all that, but you dudes still don't seem to get it. Play with your little charts and benchmarks all you want, but in the end...
...It's this:
https://developer.amd.com/tools-and-sdks/
...VS. this:
https://developer.nvidia.com
There is literally no contest.
vladx - Wednesday, October 28, 2020 - link
Well said, until AMD invests billions in software they will always struggle against both Nvidia and Intel...
Spunjji - Thursday, October 29, 2020 - link
Then why are they clobbering Intel in performance and not-exactly-struggling against Nvidia here?
Unless you mean in terms of company size/profitability, in which case sure, but I don't factor those into my PC purchasing decisions.
inighthawki - Wednesday, October 28, 2020 - link
Sure, but not everyone cares about these things. If you want to do lots of machine learning with CUDA and stuff like that, nobody is stopping you from investing in nvidia. AMD is clearly targeting gamers, and what gamers care about are the benchmarks showing who has the highest FPS at the best price.
TheinsanegamerN - Thursday, October 29, 2020 - link
Developers, corporations, and large industry, all of which make up a huge chunk of a processor company's revenue, care about this stuff.
Spunjji - Thursday, October 29, 2020 - link
Not really a relevant comment on a gaming GPU comment section, though.
inighthawki - Friday, October 30, 2020 - link
Yes, except during the launch of these GPUs, if you actually watched it, they explicitly stated on multiple occasions that these GPUs are being targeted at gamers.
mdriftmeyer - Wednesday, October 28, 2020 - link
Is that why Nvidia keeps getting spanked in recent Supercomputing contracts, because your AI/Machine Learning APIs are so vastly superior that these government facilities couldn't be bothered to use them?
Tams80 - Sunday, November 1, 2020 - link
It's almost like those wanting supercomputing do care about price.
MFinn3333 - Wednesday, October 28, 2020 - link
Because that is not their GPU SDK but rather their AMD64 & EPYC website?
https://gpuopen.com/amd-gpu-services-ags-library/
Spunjji - Thursday, October 29, 2020 - link
When those "little charts and benchmarks" represent the games I'll actually be playing with the cards I buy - and when they show AMD competing *even without the benefits of huge investment in developer relations* - then I'll use them to pick whichever card gives me the performance I want at the budget I want and be happy with it.But oh, those goalposts, they can always be moved somewhere else for those who are motivated enough to carry them around all day. Feel free to make your purchasing decisions based on whoever spends the most sponsoring developers, if that gives you a sense of value 🤷♂️
vladx - Thursday, October 29, 2020 - link
Yes, as a software engineer that does NN and AI work, there's no doubt I will choose the product with the superior software stack.
Spunjji - Friday, October 30, 2020 - link
Good for you. Roughly what percentage of the *gaming GPU* market would you say you represent?
maroon1 - Wednesday, October 28, 2020 - link
Does AMD ray tracing work on current games?? For example, can I use ray tracing in Control on these cards??? Or is Nvidia RT different, so developers need to patch existing games to support AMD ray tracing??
Honestly I feel going with the RTX 3070 is the better choice since I can use DLSS 2.0 and RT in the upcoming Cyberpunk 2077. Without DLSS, RT will not run with good fps anyway.
vladx - Wednesday, October 28, 2020 - link
Nope, no RTX compatibility yet
Spunjji - Thursday, October 29, 2020 - link
I thought Control used DXR? If it does then AMD should be compatible - performance is another question, of course.
xrror - Wednesday, October 28, 2020 - link
The thing is, DXR is part of the DX12u spec so assuming Intel ever gets anything out, game devs have motivation to support DXR even if they have to patch it in later, since in theory it will become the more common implementation.Because Intel still will be making integrated graphics right? So eventually DXR should be supported on even on the potato machines of the future.
That said, I never said it would run well - also I sincerely hope nVidia has a way to run DXR only games efficiently.
Again that info might be out there somewhere, but I sure don't see it brought up much.
inighthawki - Wednesday, October 28, 2020 - link
If the game is using the nvidia RTX APIs, then no. If the game is using DXR, then it should work.
QChronoD - Wednesday, October 28, 2020 - link
W/o having any benchmarks to go on yet, it looks like they've overpriced the 6800 based on the raw performance numbers. The XT is 33% FLOPier but only costs 13% more. I know that actual performance never ends up being 1:1 scaling with theoretical numbers, but pricing it at $550 or so would seem to make more sense.
QChronoD - Wednesday, October 28, 2020 - link
Edit: Actually you'd have to drop the price down to $499 to get a 30% difference. Plus then it would be more in line with the 3070 pricing.
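For reference, the arithmetic behind the "33% FLOPier but only 13% more" point, using the announced stream-processor counts, game clocks and launch prices (a quick sanity check, nothing more; using boost clocks instead of game clocks would shrink the gap somewhat):

```python
# Theoretical FP32 throughput vs price, RX 6800 vs RX 6800 XT (announced specs).
def tflops(shaders: int, clock_mhz: int) -> float:
    return shaders * 2 * clock_mhz / 1e6   # 2 FLOPs per shader per clock

rx6800   = {"shaders": 3840, "game_clock": 1815, "price": 579}
rx6800xt = {"shaders": 4608, "game_clock": 2015, "price": 649}

tf_6800   = tflops(rx6800["shaders"], rx6800["game_clock"])      # ~13.9 TFLOPS
tf_6800xt = tflops(rx6800xt["shaders"], rx6800xt["game_clock"])  # ~18.6 TFLOPS

print(f"FLOPS ratio: {tf_6800xt / tf_6800 - 1:.0%}")                  # ~33%
print(f"Price ratio: {rx6800xt['price'] / rx6800['price'] - 1:.0%}")  # ~12%
print(f"6800 price for parity in $/TFLOP: "
      f"${rx6800xt['price'] * tf_6800 / tf_6800xt:.0f}")              # ~$487
```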
Spunjji - Thursday, October 29, 2020 - link
Yeah, I'm a bit confused by the 6800 pricing. I guess they're confident it will beat out the 3070, but it would have been nice to see them do that at a closer price to give a definitive reason to buy their product. I guess that's the downside of competing with a cut-down larger chip vs. having a smaller die like the 3070 uses.
patlak - Thursday, October 29, 2020 - link
Yeah, but 6800 has 16GB of RAM which will be a huge difference going further.TheinsanegamerN - Thursday, October 29, 2020 - link
Given that 4GB is enough for 1080p as of yet (according to gamernexus testing) and 8GB can handle 4k without an issue, I doubt that the 8 VS 16GB argument will hold any water until these cards are several years old, at which point they will have two newer generations to work with.Spunjji - Thursday, October 29, 2020 - link
Some people keep their GPUs for that long, but I am a bit skeptical about its relevance. Comparing 1440p performance in modern games between a GTX 970 and 980M (same hardware but with 8GB RAM) I haven't been able to pick out a scenario where I could overwhelm the RAM on the 4GB variant without dropping the frame-rate too low on the 8GB variant for it to matter anyway.Thanny - Friday, October 30, 2020 - link
There are already games that use more than 8GB at 4K. There will only be more in the future.Pentium Peasant - Wednesday, October 28, 2020 - link
3080 appears to be the better GPU overall, at least on paper, with twice as many shader cores, 40% better FP32 theoretical performance and 320-bit GDDR6X memory (760 vs. 512 GB/s).
Personally, I doubt the 6800XT would manage to surpass 3080 at 4K, or at any resolution for that matter. The rasterization performance simply isn't there.
It's sad that AMD's stuck in an endless loop of catch-up when it comes to GPUs. I was expecting more, to be honest.
mdriftmeyer - Wednesday, October 28, 2020 - link
Get used to being disappointed.
catavalon21 - Wednesday, October 28, 2020 - link
I'm thrilled there's reasonable competition on both the CPU and GPU fronts, but let's be real - a company without the deep pockets of either an Intel or an Nvidia even competing on BOTH fronts is amazing, much less having us all hoping for a "de-throning".
Gigaplex - Wednesday, October 28, 2020 - link
Comparing shader core counts between different architectures is meaningless. And if the Infinity Fabric does what they say it will, that may mean AMD has better effective memory bandwidth.
inighthawki - Wednesday, October 28, 2020 - link
>> It's sad that AMD's stuck in an endless loop of catch-up when it comes to GPUs. I was expecting more, to be honest.
Really? I was expecting *less*. I mean perhaps if the 6900xt was only able to match the 2080Ti or 3080 I'd agree with you, but it seems to be toe to toe with the 3090 while using less power, and for a cheaper price. It seems to compete everywhere except for in ray tracing.
I'm failing to see how essentially matching the performance of a competitors chip that was released only a month ago is a "disappointing catch-up." What exactly would have been reasonable to you? Do they need to have a solid +50% on top of the 3090?
Spunjji - Thursday, October 29, 2020 - link
Trolls gonna troll.
Calmly say things that aren't true, react to the reaction, shill for your own team. Win win win.
raywin - Thursday, October 29, 2020 - link
it would be nice if we had some feedback system that would mark said posters as trolls
Spunjji - Thursday, October 29, 2020 - link
They'd just pour more trolls in and make it an up/down pissing match.
tamalero - Thursday, October 29, 2020 - link
"Number" of cores is irrelevant between technologies.
Nvidia's own cores might have different power, efficiency and performance than AMD's.
Spunjji - Thursday, October 29, 2020 - link
Someone's still counting shader cores and "theoretical FP32 performance" even after it's been conclusively established that Ampere is deeply misleading in both aspects?
Someone's still claiming a 4K victory for the 3080 even after we have benchmarks showing parity?
You were expecting to be able to use these lines to troll, so you did, regardless of the reality of the situation. Sad stuff.
Thanny - Friday, October 30, 2020 - link
The 3080 has 4352 shader cores. Don't believe the marketing lie that nVidia decided at the last minute to tell.
Each shader can do either one INT32 and one FP32 per batch, or two FP32's per batch.
The 2080 Ti also has 4352 shaders, but can't do two FP32's per batch. Given that the 2080 Ti and 3080 run at about the same clock speeds (ignore the published boost clocks - another lie nVidia tells, as their cards automatically overclock beyond those figures), the performance uplift of the 3080 over the 2080 Ti is a rough proxy for how much of that extra FP32 capacity is useful in games. The best case seems to be about 30% averaged across a number of games at 4K, and less at lower resolutions. So you could say the effective shader count of the 3080 is about 5600.
The 6800 XT has 4608 shaders, and ~10% higher clocks. That shouldn't be enough to make it match the 3080 at the same IPC, so RDNA 2 must have notably higher IPC than Ampere.
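Putting rough numbers on that argument (a sketch only: the 30% uplift and the ~10% clock advantage are the estimates from the comment above, not measured averages; the shader counts are the published ones):

```python
# "Effective" Ampere shader count, following the reasoning above:
# the 3080 and 2080 Ti have the same 4352 FP32+INT32 pipes and similar clocks,
# so the ~30% real-world 4K uplift approximates how much of the extra FP32
# issue capability is actually usable in games.
shaders_2080ti = 4352
uplift_4k      = 0.30                 # estimate quoted above
effective_3080 = shaders_2080ti * (1 + uplift_4k)

shaders_6800xt  = 4608
clock_advantage = 1.10                # ~10% higher clocks, per the comment
equiv_6800xt    = shaders_6800xt * clock_advantage

print(f"Effective 3080 shaders:          ~{effective_3080:.0f}")  # ~5660
print(f"6800 XT shaders x clock factor:  ~{equiv_6800xt:.0f}")    # ~5070
print("=> to match the 3080, RDNA 2 needs higher per-shader throughput in games.")
```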
Spunjji - Monday, November 2, 2020 - link
Solid way of putting it.
Alas, the marketing has done its work, and the CA wildfires have prevented Anandtech from properly covering the details. No other publication seems to be particularly interested in correcting the nonsense, and the comment bots are doing their bit to propagate the narrative.
Hifihedgehog - Wednesday, October 28, 2020 - link
I feel like the RX 6800 XT is the best value proposition here. It undercuts by $50 and pretty much matches the RTX 3080 blow for blow. Like most of you probably do, I feel like the RX 6900 XT is more of a halo product than anything to assert dominance in TITAN/ThreadRipper like-fashion. On the other hand, the RX 6800 seems like it is priced a tad too high for my liking for it to make a meaningful impact. Sure, it outperforms the RTX 3070 and RTX 2080 Ti by a claimed 10-20 performance lead and it has a juicy 16GB, but it costs more. In terms of optics, hitting that $499 price point I think would have done more for AMD here to make inroads. If an RTX 2070 Ti comes in at $599, yeah, I will feel the RTX 6800 succeeded in the end. As it is, though, I feel like the crown jewel of the line up is the RTX 6800 XT.Hifihedgehog - Wednesday, October 28, 2020 - link
*RTX 3070 Ti
Hifihedgehog - Wednesday, October 28, 2020 - link
Corrected: I feel like the RX 6800 XT is the best value proposition here. It undercuts by $50 and pretty much matches the RTX 3080 blow for blow. Like most of you probably do, I feel like the RX 6900 XT is more of a halo product than anything to assert dominance in TITAN/ThreadRipper like-fashion. On the other hand, the RX 6800 seems like it is priced a tad too high for my liking for it to make a meaningful impact. Sure, it outperforms the RTX 3070 and RTX 2080 Ti by a claimed 10-20 performance lead and it has a juicy 16GB, but it costs more. In terms of optics, hitting that $499 price point I think would have done more for AMD here to make inroads since $499 is the stretching point for many people's budgets, especially with the current world situation. If an RTX 2070 Ti comes in at $599, yeah, I will feel the RX 6800 succeeded in the end. As it is, though, I feel like the crown jewel of the line up is the RX 6800 XT. I am tentatively thinking of upgrading to it from my GTX 1080 Ti after the reviews (or enough leaked benchmarks) pour in.Magnus101 - Wednesday, October 28, 2020 - link
Without DLSS, in the games where it is enabled for Nvidia, AMD will be totally crushed and therefore is nowhere near Nvidia.
DLSS2 was a big hit for Nvidia compared to the flawed DLSS1 and is, in my view, THE biggest reason/selling point for the already powerful 3000-cards.
Hifihedgehog - Wednesday, October 28, 2020 - link
Like some, I don't use upscaling because no matter how proficient it claims to be, it is not native, so that is a non-factor for me.
Tilmitt - Wednesday, October 28, 2020 - link
But isn't DLSS on quality mode actually outputting a better image than the target native resolution in many cases? No point in being ideological about this.
TheinsanegamerN - Wednesday, October 28, 2020 - link
In ONE game, Control, which is Nvidia's answer to Ashes: a tech demo dressed as a game.
The rest of the games tested offered noticeable decreases in image quality.
schujj07 - Wednesday, October 28, 2020 - link
It is not possible for upscaling to produce a better image than native resolution. Looking at DLSS stills compared to native there are obvious differences. The upscaled images aren't as sharp.
Spunjji - Thursday, October 29, 2020 - link
No, it objectively is not - and you lose a huge chunk of the FPS benefits on Quality mode.
To be clear, DLSS 2.0 on Quality mode looks *good* - but it's only comparable to native if you smear the native-res image with an inadequate AA technique. Personally I find the differences painfully obvious, to the extent that I'd rather drop some detail levels to raise FPS, but I'd absolutely use DLSS over having to run the game at a lower-than-native resolution if those were the only options.
Spunjji - Thursday, October 29, 2020 - link
It's also so, so deeply subjective. Some people really like high levels of noise reduction on digital camera images, for instance - whereas I think it makes things look weird and cartoonish. DLSS (even in Control) looks like noise reduction algorithms to my eyes.
Thanny - Friday, October 30, 2020 - link
No, it isn't. It's introducing fake information, so the image often doesn't match what it's supposed to look like, and it introduces temporal artifacts like flickering textures.
In no way is the latest DLSS higher quality than native rendering.
Tams80 - Sunday, November 1, 2020 - link
You're also talking about a market where many buyers are trying for the absolute best performance. These aren't budget cards; hell, some are on their own the price of some entire systems.
TheinsanegamerN - Wednesday, October 28, 2020 - link
If I wanted to play in blur-o-vision I could run my games at non-native lower resolutions, for free, with any GPU.
Spunjji - Thursday, October 29, 2020 - link
Yeah, funny how everyone's super keen to compare fresh apples to bruised apples, just because bruised apples are only on sale from one vendor...
Spunjji - Thursday, October 29, 2020 - link
Not exactly "totally crushed" - it's a trade-off you can choose to make. It's already been objectively demonstrated that most people can't readily pick a winner between DLSS and AMD's RIS when playing a game in motion, and the latter isn't game dependent.
I favour RIS, because I really don't like the way DLSS resembles heavy noise reduction on a digital camera - but I acknowledge that it has its issues, like crawling and over-sharpening. Neither is perfect, but acting like they're not comparable is a bit silly.
Exodite - Thursday, October 29, 2020 - link
I actually feel the real competition/spoiler for the 6800 is the 6800XT, not the 3070.
The 2070S were $100/25% more expensive than the 5700XT (locally actually more as Nvidia cards are marked-up more) while offering the same amount of VRAM and, at best, a single-digit performance advantage.
Comparatively the 6800 is $80/16% more expensive than the 3070 while offering *twice* the VRAM and a solid, double-digit performance increase.
Assuming the performance numbers and pricing pan out there would have to be quite specific edge cases to argue for the 3070.
More problematic is the 6800XT for only $70/12% more...
Spunjji - Thursday, October 29, 2020 - link
I'm inclined to agree - it's out of my own price range, but if I had the money then that's where I'd put it.
imaheadcase - Wednesday, October 28, 2020 - link
On paper, AMD GPUs always "look" good. That should be AMD's slogan for its GPUs.
raywin - Wednesday, October 28, 2020 - link
paper launch, false edition reviews should be nvidia's slogan. 3 generations of bs
DigitalFreak - Wednesday, October 28, 2020 - link
Hell, their CTO is named Mark Papermaster so what does that tell you? LOL
DigitalFreak - Wednesday, October 28, 2020 - link
Waiting to see if I can call him Mark Paperlaunch on Twitter...
Spunjji - Thursday, October 29, 2020 - link
It tells you that his name is Papermaster... You're not new to the Paperlaunch joke, BTW. Good luck with that projection.
Spunjji - Thursday, October 29, 2020 - link
I'm very much enjoying the steady retreat of the usual suspects from "it will be terrible" to "you can't prove it will be good". On launch I suspect it will be "I already bought a 3090 lol".
HurleyBird - Wednesday, October 28, 2020 - link
You can update the chart with the official AMD specs:
https://www.amd.com/en/products/specifications/com...
Roseph - Wednesday, October 28, 2020 - link
Imagine it's like the whole 5700 XT fiasco...
Pinn - Wednesday, October 28, 2020 - link
Why does 4K or even 8K need so much more memory? A frame buffer is ~128M at 4K. How many frame buffers are needed?
Magnus101 - Wednesday, October 28, 2020 - link
4K has 4 times more pixels than 1080p.
Pinn - Wednesday, October 28, 2020 - link
Sure, but that's ~96M more. Double or triple buffering still comes in at less than a G.
Zingam - Wednesday, October 28, 2020 - link
They use higher quality textures for these resolutions. It looks like to get really better results you need to have specific textures for specific resolutions.
Pinn - Wednesday, October 28, 2020 - link
OK, so they must be modern games. You can run Quake 2 at 8K if you want, but it won't have the higher resolution textures.
Gigaplex - Wednesday, October 28, 2020 - link
There's a lot more than just the frame buffer. 4x more pixels means 4x more pixel shading operations and the associated resources. Higher quality textures are also required to take advantage of the higher resolution.
inighthawki - Wednesday, October 28, 2020 - link
Bear in mind the game is often not the only thing running on the system. You will also have the desktop compositor running at all times, plus apps like browsers (which are commonly run fullscreen). A double buffered swapchain in 8K is a minimum of 256MB *each*, so you can easily consume a few gigabytes just by having a handful of apps open.
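The swapchain arithmetic behind that 256MB figure, for reference (a quick sketch assuming a 4-bytes-per-pixel format; HDR formats would be larger):

```python
# Memory footprint of a double-buffered 8K swapchain at 4 bytes per pixel.
width, height   = 7680, 4320
bytes_per_pixel = 4    # e.g. an 8-bit RGBA format; HDR formats use more
buffers         = 2    # double buffering

per_buffer = width * height * bytes_per_pixel
total      = per_buffer * buffers

print(f"One 8K buffer:   {per_buffer / 2**20:.0f} MiB")  # ~127 MiB
print(f"Double buffered: {total / 2**20:.0f} MiB")       # ~253 MiB (the ~256MB quoted)
# For comparison, a single 4K (3840x2160) buffer:
print(f"One 4K buffer:   {3840 * 2160 * 4 / 2**20:.0f} MiB")  # ~32 MiB
```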
vladx - Thursday, October 29, 2020 - link
That's completely false, otherwise you wouldn't have been able to have more than 4-5 applications open a decade ago when GPUs maxed out at 512MB VRAM.
inighthawki - Friday, October 30, 2020 - link
It is not, you simply chose to infer something I didn't say - namely that such a scenario wouldn't work at all.
Video memory is virtualized and placed in system memory when there is pressure in VRAM. You just get less performance when it's there. Running on those 512MB GPUs is more than doable, but you'll see noticeable performance deltas. While different under the hood (i.e. the memory is still directly accessible, albeit at much slower bandwidth), conceptually it's quite analogous to the old days when people didn't have a lot of RAM, and your computer slowed down as you needed to constantly page in data from the page file.
Try running an 8K display on a 1GB GPU and let me know how it works out for you after you start actually running more than one app on it at a time.
Tams80 - Sunday, November 1, 2020 - link
Games were less demanding then, and actually you couldn't run that many applications simultaneously without experiencing slowdowns.
Just look up "ways to speed up your PC" from that time. One of the main recommendations was to close any unwanted applications.
It's only relatively recently that not worrying about what applications are open when doing something intensive (and applications for all sorts of different things) has become the norm. Hell, I still close many applications before launching a game or video editor out of habit.
Thanny - Friday, October 30, 2020 - link
A 4K frame buffer is 32MB.
That's not what takes up the memory at higher resolutions. Various kinds of processing require a certain amount of memory per pixel to do. When you quadruple or double the number of pixels, memory requirements can go up by quite a bit.
Dug - Wednesday, October 28, 2020 - link
I hope driver improvements come through.
They are so broken, especially for emulation.
twtech - Wednesday, October 28, 2020 - link
It's not clear why the author thinks the 6800 would be favored over the 6800 XT - it looks like it offers 75% of the performance, for 90% of the price. The 6800 XT is actually the better value in terms of performance-per-dollar, and neither of the cards are cheap.
If you're going to spend over $500 either way, you might as well get the card that is quite a bit faster.
Intel999 - Wednesday, October 28, 2020 - link
The 6800 is 16% more expensive than the 3070 with up to 18% better performance. In the real world you will probably not find a 3070 at $499 for six months. Assuming you can find one to begin with.
AMD will sell all the 6800s they can produce with good margins too. Actually they will sell all the 6900xts and 6800xts too.
There is no way AMD forecasted with the almost zero availability of Ampere in mind. With their shortage of wafers that are being spread thin by a current shortage of Renoir and the planned uptick in EPYC rollout they don't have spare wafers to just throw at higher production of NAVI2.
There will most likely be long periods of out of stock situations for these cards for the next six months. Actually because of demand well above AMD's projections.
It should now be apparent as to why Nvidia launched Ampere prematurely. Their insiders leaked the fact that AMD was going to match them in performance at the top end. Using less energy too.
After all, wasn't efficiency a Nvidia talking point ever since the 290X was the performance king?
Spunjji - Thursday, October 29, 2020 - link
Efficiency has always been an Nvidia talking point when they're good at it, and for the rest of the time it's "we have the fastest card". Never seen so many fanboys swivel their heads so fast as when Nvidia switch position on that metric.
You're probably right about availability, though. It's going to suck for everyone for months.
TheJian - Wednesday, October 28, 2020 - link
Too bad, missed an opportunity to really tank NV here with 830mm^2 reticle limit while NV stuck at 628 on samsung. Your cards would be $1600+ and BEATING NV in everything, but you chose to lose. AGAIN. Bummer. You were even on the 2nd use here at 7nm so not even as risky as huge on new process. DUMB. Still chasing "good enough" instead of KINGS. You are 50w less because you are losing to a larger die. I'd be impressed at 50w less and WINNING.
What's a 68000 card? Fix that headline? AMD reveals a 68000? :) For a second I was excited about a massive card...LOL. But can't be bothered to read the 2nd page now.
Jon Tseng - Wednesday, October 28, 2020 - link
Setting aside whatever yield issues you'd have at 830mm2, TSMC fab availability doesn't grow on trees. Given holiday season demand from Ampere, Xbox, PS, AMD, and Huawei last-time buys, it's unlikely you'd get enough volume to cover the development cost.
Spunjji - Thursday, October 29, 2020 - link
It'd be transparently stupid, which is why AMD don't do it. Nvidia only get away with it because they tricked gullible fanboys into bootstrapping their datacentre business, and now they have a market that will pay any amount of money for a supersized die.
TesseractOrion - Wednesday, October 28, 2020 - link
TheJian: You certainly never miss an opportunity to spout gibberish.
Back OT: Excellent showing from AMD, better than expected.
raywin - Wednesday, October 28, 2020 - link
cool thing is, ppl speak with their wallet, and either the company can meet demand, or not.
Qasar - Wednesday, October 28, 2020 - link
yes he does.
that's what he does best, spout crap and his anti amd bs
nandnandnand - Wednesday, October 28, 2020 - link
They should compete on die size when they do MCMs, so RDNA 3 or whatever.
Oxford Guy - Wednesday, October 28, 2020 - link
Jian, we have a duopoly. Duopoly greatly reduces the pressure of competition.
Monopolies raise prices artificially and duopolies are only an improvement over that — certainly not the same thing as the cure (an adequate minimum level of competition).
People think the Intel—AMD competition is more exciting than it is because both companies have had phases of disastrous management.
Spunjji - Thursday, October 29, 2020 - link
"but you chose to lose. AGAIN"Yes, because making a card at 7nm reticle limit is so EASY and OBVIOUSLY GOOD! Why, pumping 400W+ into a GPU is just *TRIVIAL*. Fuck yields, what do they have to do with margins anyway? Screw the actual market, let's make cards that only posers can afford! 🤡
"But can't be bothered to read the 2nd page now"
Now you know how everyone feels about your usual comments 🤭
simbolmina - Wednesday, October 28, 2020 - link
I feel like they introduced a single card (the 6800 XT), since the others are meaningless at these price/perf points. There is no point buying the 6800 or the 6900 XT.
K_Space - Wednesday, October 28, 2020 - link
AMD will match the product stack of the competition. It's common sense. Of course quite a few cards will come out using RDNA 2 - again matching the competition's product stack.
Hxx - Wednesday, October 28, 2020 - link
oh boy those Ryzen money paid off for AMD over the last few years. They made some major investments to showcase these gains and while its nice to see how far ryzen has come, on the gpu side, i'm just mind blown. I was willing to bet money that they will not tie nvidia anytime soon but if those slides are true then wow it is very impressive indeed. Its gonna be a great time to build a gaming PC this holiday season provided that you find stock lol.raywin - Wednesday, October 28, 2020 - link
"find stock" lol x100GentleSnow - Wednesday, October 28, 2020 - link
“'more money than sense' category""At $650... this is the AMD card that mere mortals can afford."
Why is it people can spend thousands and 10s of thousands on golf, musical instruments, travel, a nice car, etc. and no one bats an eye (“His money, nothing wrong with that”)... but as an IT professional, digital creative and gamer, if I spend several hundred on a video card I'll use for years, I'm obscenely, irresponsibly wealthy?
Yes, top-end PC gear costs more than a pair of blue jeans. But how is even $1,000 something that has you picture enthusiasts laughing maniacally in their top hats and monocles while loaning Bezos money?
andrewaggb - Wednesday, October 28, 2020 - link
I hear that. I know everyone's financial situation is different but we're still talking about stuff measured in hundreds of US dollars. Even the halo products cost less than a nice smart phone and those are everywhere.
catavalon21 - Wednesday, October 28, 2020 - link
+1
Makaveli - Wednesday, October 28, 2020 - link
Most of the people whining still live at home with the parents. The rest of us that are working adults with careers understand there are far more expensive hobbies than desktop computers. $650 ain't nothing compared to a car!
Spunjji - Thursday, October 29, 2020 - link
Comparing GPU prices to another market area that has been subject to absurdly ballooning costs in the past 5 years isn't exactly a winning argument.
"Wow, people spend WAY too much on phones these days, I guess that makes GPU prices okay"
No ❤
Kurosaki - Wednesday, October 28, 2020 - link
If my kid wants to go to the soccer team it's 100 USD / year, swimming classes? 100 USD. Archery? About the same. And so on. The 9800 GT cost below 200 USD and could meet the ATI top-tier 5000 series cards. Then shit got crazy apparently...
mdriftmeyer - Wednesday, October 28, 2020 - link
I don't know where you live, but those prices for the kids are dirt cheap.
msroadkill612 - Wednesday, October 28, 2020 - link
That used to get me angry when the kids were little. We were OK, but I am sure there were kids who missed out cos the rich guys in charge wanted to ego trip on doing it the fancy way.
A ball, like color t-shirts & sneakers, should be fine. I figure $100 went on boots & uniform alone - mandatory items.
Oxford Guy - Wednesday, October 28, 2020 - link
Musical instruments don't become worthless in a short time.
Spunjji - Thursday, October 29, 2020 - link
People love finding ways to justify objectively poor financial decisions
inighthawki - Wednesday, October 28, 2020 - link
Let's not ignore the number of people who pay $1k+ a year for the latest iPhone.
ChaosFenix - Wednesday, October 28, 2020 - link
I wonder if the Smart Access Memory could be made available outside of gaming. If you have a GPU with 16GB of memory, would you really need more than 8-16GB of system memory? I can see it being limited to workloads that use the GPU, but honestly there are a lot of workloads that are GPU accelerated at this point; it might have a larger potential impact than you might think.
inighthawki - Wednesday, October 28, 2020 - link
While I'm sure there's some value outside of gaming, it's not really suitable for many types of tasks. Reads over the PCI aperture are usually painfully slow and can be on the order of single digit megabytes per second.
Writes are typically more reasonable, but you're still talking about writing over PCIe to the device - still not suitable for general purpose. In games, this memory is commonly used for things like vertex buffers, or DX12's descriptor heaps where they are not read from at all and written to infrequently/in small quantities, and where the limited memory bandwidth is less important than the cost of setting up staging resources and instructing the device to make copies which would have even greater overhead and code complexity.
It also has some value when coupled with the new DirectStorage APIs since the PCIe IO space is directly accessible, which gives them more efficient access to the frame buffer to do direct copies over PCIe from the NVME controller. On platforms which dont support this, you'd have to shuffle memory around in VRAM so that the texture uploads can go to the first 256MB. That's also the best case scenario. In most cases you would have to create a staging surface and have DirectStorage copy there first, and then transfer to a non-visible region of VRAM which is an extra copy.
In non-gaming graphics applications, most things are not really necessary in real time, so it's commonly more efficient to just take the time and manually upload content to VRAM in whatever way is convenient. The startup costs are usually not that big of a deal unless you're repeating the task dozens or hundreds of times.
NextGen_Gamer - Wednesday, October 28, 2020 - link
@ Ryan Smith: AMD has posted the complete specs for the range on their site now. This includes CU count, stream processors, TMUs, ROPs, Ray Tracing accelerator cores, Infinity Cache size, memory clockspeeds & bandwidth, etc. for all three cards. See here: https://www.amd.com/en/products/specifications/com...
Gideon - Wednesday, October 28, 2020 - link
Official specs (all have 16Gbps memory)
https://www.amd.com/en/products/specifications/com...
bill44 - Wednesday, October 28, 2020 - link
Is it too early to ask about connectivity?
Can you confirm HDMI 2.1 full 48Gbps?
NextGen_Gamer - Wednesday, October 28, 2020 - link
@ bill44 - I noticed on AMD's specs page that it doesn't mention HDMI versions - which is curious. It does say that its DisplayPort is v1.4 with DSC support; but next to HDMI, it just says "Yes" under each card (like, duh). I'm not reading into it too much yet, but it does seem like if the cards had full HDMI 2.1 support that AMD would like to promote that front-and-center... We will see I guess.Strom- - Wednesday, October 28, 2020 - link
HDMI 2.1 with VRR is confirmed. It's at the bottom of the official page.
https://www.amd.com/en/graphics/amd-radeon-rx-6000...
bill44 - Wednesday, October 28, 2020 - link
Thanks. I worried for a sec. VRR can be implemented w/HDMI 2.0b. How long before we see DP2.0? 2 years?
vladx - Thursday, October 29, 2020 - link
Next gen hopefully
Vitor - Wednesday, October 28, 2020 - link
I feel like if they had waited for 5nm, they would definitely be ahead of Nvidia, but maybe it would be too late to the market.
vladx - Thursday, October 29, 2020 - link
Apparently TSMC's 5nm is not that much improved over 7nm+.
Santoval - Monday, November 2, 2020 - link
That depends on how you define "improved"; do you mean transistor density or some other qualitative improvement? Per the article below TSMC's 7nm+ (called 7FFP in the article) has ~114 million transistors per mm^2 while Samsung's 7nm node has ~95 MTr/mm^2. On the other hand TSMC's 5nm node has ~173 MTr/mm^2 while Samsung's 5nm node has just 126.5 MTr/mm^2. In other words Samsung's 5nm node is barely denser than their 7nm node - it's basically a 7nm+ node rather than a 5nm node.
Of course these are the highest transistor densities of these nodes which are usually employed for *mobile* SoCs and/or when the focus is on power efficiency rather than high performance. The node variants TSMC and Samsung use for high power GPUs and CPUs are *far* less dense. Nevertheless they are still representative of the average density of each new node (when the densest variant of a node gets denser its middle and lowest density variants will also get denser).
Tl;dr TSMC's 5nm node (at its highest density variant) has almost twice the transistor density of TSMC's original 7nm node, representing almost a full node. In contrast Samsung's 5nm node (again, its highest density library) is just ~33% denser than their 7nm node, i.e. it is merely 1/3 of a node. On top of that, per the article below, Samsung's 5nm node has quite a higher wafer cost than TSMC's 5nm node, despite being far less dense! For more :
https://semiwiki.com/semiconductor-manufacturers/s...
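The scaling ratios from the figures quoted above, worked out (a quick check using only the numbers in the comment; the "almost full node" comparison there is against TSMC's original 7nm node, which is less dense than the 114 MTr/mm^2 quoted here for 7nm+):

```python
# Peak logic density figures quoted above, in millions of transistors per mm^2.
density = {
    "TSMC 7nm+":    114.0,
    "TSMC 5nm":     173.0,
    "Samsung 7nm":   95.0,
    "Samsung 5nm":  126.5,
}

tsmc_gain    = density["TSMC 5nm"] / density["TSMC 7nm+"]
samsung_gain = density["Samsung 5nm"] / density["Samsung 7nm"]

print(f"TSMC 5nm vs TSMC 7nm+:      {tsmc_gain:.2f}x")    # ~1.52x
print(f"Samsung 5nm vs Samsung 7nm: {samsung_gain:.2f}x")  # ~1.33x
```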
Oxford Guy - Wednesday, October 28, 2020 - link
$1000 plus tax for a 256-bit bus. Inflation at its finest.
nandnandnand - Wednesday, October 28, 2020 - link
c o p e
Oxford Guy - Wednesday, October 28, 2020 - link
I hope AMD buyers will cope when Unreal Engine and Final Fantasy are found to have been borking Infinity Cache with real, carefully disguised, zeal.
Makaveli - Wednesday, October 28, 2020 - link
If it had a 1GB cache on it on a 192bit-bus you would still be complaining. At the end of the day what matters most is performance.
Spunjji - Thursday, October 29, 2020 - link
People bought Pascal at high prices. Then they bought the RTX 20 series at even higher prices. The ceiling was smashed and now we're all bathing in glass. Yaayyyy
gagegfg - Wednesday, October 28, 2020 - link
When is the Ryzen 5000 review???
FwFred - Wednesday, October 28, 2020 - link
Funny enough, after following this launch I was able to find an RTX 3080 at Best Buy, so I'll be on team green for the next few years.
K_Space - Wednesday, October 28, 2020 - link
As predicted:
https://www.mooreslawisdead.com/post/nvidia-s-ulti...
Hifihedgehog - Wednesday, October 28, 2020 - link
Reality check: It's not in stock. That was totally a one-off at that random guy's Best Buy.
https://www.bestbuy.com/site/nvidia-geforce-rtx-30...
Spunjji - Thursday, October 29, 2020 - link
It's funny how everything he wrote is being borne out, but Nvidia fans won't hear and don't care.
raywin - Thursday, October 29, 2020 - link
he seems to have very good sources
mdriftmeyer - Wednesday, October 28, 2020 - link
In case you need a reminder, no one cares.
Lord of the Bored - Wednesday, October 28, 2020 - link
Reminder that you can flip it on eBay for a profit now, and buy one for personal use in two months when the circus has left town.
AMDSuperFan - Wednesday, October 28, 2020 - link
This will easily be twice as fast as anything Nvidia has with those specs. I wonder if it is a good launch or maybe next year we will see some?
msroadkill612 - Wednesday, October 28, 2020 - link
Re the seeming "decadence" of these price tags for a "gaming toy", it's worth noting that there is a global trend for urban youth to no longer aspire to a car.
Around here, $650 is little more than half the annual fixed ~taxes alone.
msroadkill612 - Wednesday, October 28, 2020 - link
AMD's lower power could be a more effective sales influencer than it seems. A ~600W PSU seems sufficient for a 6800 XT, but the competition would need an expensive PSU upgrade.
Revv233 - Wednesday, October 28, 2020 - link
Bummer about the price. Looks like I'll keep my 5800xt... If I had an older card I'd be looking at the.... 5800xt LOL
Spunjji - Thursday, October 29, 2020 - link
5700XT?
I'm on a 580. I'll be looking out for the 6700XT, then deciding whether I get that or a discounted 5700.
K_Space - Wednesday, October 28, 2020 - link
Will SAM work with Zen 1 and Zen 2? The article only talks about the Ryzen 5000 series.
Ryan Smith - Thursday, October 29, 2020 - link
AMD has only confirmed the Ryzen 5000 series at this time.
SanX - Wednesday, October 28, 2020 - link
Looks like no new HDMI port to support 8K 60 Hz 4:4:4. Not a future-proof card.
Losers
Smell This - Wednesday, October 28, 2020 - link
Y A W N
Spunjji - Thursday, October 29, 2020 - link
They have HDMI 2.1, dingus.
Tilmitt - Wednesday, October 28, 2020 - link
"Like DLSS, don’t expect the resulting faux-detailed images to completely match the quality of a native, full-resolution image. However as DLSS has proven, a good upscaling solution can provide a reasonable middle ground in terms of performance and image quality."Doesn't DLSS in quality mode actually have better image quality than native while still running faster?
ChrisGX - Wednesday, October 28, 2020 - link
DLSS is an upscaling solution and Ryan was talking about upscaling from lower resolution images to higher resolutions (which is natural because that is what upscaling is). Now, ask yourself whether DLSS that has upscaled from some lower resolution to a higher resolution image can beat the quality of an image rendered in that higher resolution, natively. The answer is obvious and it follows from DLSS being an upscaler. The best high resolution images require the best graphics cards whether DLSS is available or not.
Spunjji - Thursday, October 29, 2020 - link
It's the new "micro-stuttering" - take a thing that exists, be totally wrong about what it actually means, act confused, spread FUD.
Spunjji - Thursday, October 29, 2020 - link
No, it doesn't. Not unless your idea of "better image quality" is "smoothed out with weird artifacts". I wonder how many people are gonna make this exact same post?
eastcoast_pete - Wednesday, October 28, 2020 - link
Thanks Ryan! Question: Any news or rumors on yields of Big Navi GPUs? Here is why I ask: A few weeks ago, Sony announced or admitted that they wouldn't be able to ship the expected number of PS5 units. Now, the PS5 GPU clocks similarly high to the Big Navi cards. Sony's shipping problems are apparently due to lower than expected yields of their APUs; in contrast, MS supposedly doesn't have that problem, but the Series X clocks significantly lower than the PS5. Thus, my question: does AMD have enough yield of high-clocking GPUs?
Ryan Smith - Thursday, October 29, 2020 - link
AMD has not disclosed (and likely never will disclose) anything about Navi 21 yields.
Thanny - Friday, October 30, 2020 - link
Sony never announced anything like that. What they did announce is that what you just repeated was a false rumor.
SagePath - Wednesday, October 28, 2020 - link
This is what happens when actual engineers run a tech company. Hard work makes competent professionals, which grows their skills, and soon enough they are able to perfect those skills and do remarkable things. I'm waiting on the reviews like everyone else, but I have confidence that Dr Lisa Su and her team can pull this off. They did it with the CPU wars, why not with the GPUs as well!!
wr3zzz - Wednesday, October 28, 2020 - link
I think it's fair for AMD to turn on Rage Mode for an apples-to-apples comparison. I have an RTX 3090 and GPU Boost is on by default. I had no idea how aggressive it is until, just out of the box, it CTD'd all my gaming sessions. It's only unfair if GPU Boost is off by default from Nvidia.
Spunjji - Thursday, October 29, 2020 - link
It really does depend on whether or not their Rage Mode numbers are representative of what most people will actually get out of the card.
Thanny - Friday, October 30, 2020 - link
I tend to agree, but the two aren't really comparable. Rage Mode is apparently nothing more than a one-button power limit increase, meaning it's not actually overclocking at all - it just lets the boost clock go higher for longer than it otherwise would. nVidia's GPU Boost, however, is a true overclock, as it routinely goes well beyond the published boost clock speed, making the latter value meaningless.
What they actually did for their 6900 XT vs 3090 comparison was give the former more power to play with, and provide the benefits of Smart Access Memory. The latter seems to be able to provide more of an uplift than Rage Mode, though it varies considerably from one game to the next.
I would have preferred that they show out-of-the-box values for their comparisons. Using Rage Mode increases power usage (undercutting the power advantage they're pushing), and SAM requires a brand new CPU that not everyone can reasonably use even once it's released (AM4, like Intel's consumer platform, is for toy computers, and doesn't have enough I/O for a real setup).
blkspade - Sunday, November 1, 2020 - link
If purchasing an RTX 3090 is a reality for anyone who would consider the 6900 XT, they could literally build an X570/5600X system with that $500 savings and get to use SAM. That is assuming they are transitioning from another DDR4-based system. These products are targeting gamers as a priority, for whom X570 offers plenty of I/O. Few people need more than 3 PCIe cards of any type, and/or more than 2 NVMe drives. After a GPU or two, most other things use way less bandwidth, and USB 3 is still an option. Even if you need something more extravagant, there are options with built-in 10GbE and Thunderbolt, allowing you to still keep slots available.
Spunjji - Monday, November 2, 2020 - link
"AM4, like Intel's consumer platform, is for toy computers"What absolute rot
tygrus - Wednesday, October 28, 2020 - link
The RX 6800 XT will probably have a board power closer to 280W, and 270W during gaming, compared to the nominal 300W.
A frame buffer of 128MB written 60 times per second and then read back out to the display = ~15GB/s.
If the Infinity Cache is only used for the frame buffer then it's worth about 15GB/s of raw bandwidth, or equivalent to adding 30GB/s to the GPU RAM. If we are just talking about a last-level cache whose use is determined by the game/application, then it could be worth much more. Memory compression can then be used to improve the usable bandwidth. Ray tracing and special effects that re-read and modify the frame buffer may make more use of it. But there is a limit to its usefulness. Microsoft is using several GB with extra bandwidth in their next console design, so 0.125GB seems very small.
The RX 6800 XT seems to be the sweet spot of power & performance with the non-XT not far behind. Availability will be the key. Nvidia RTX 3080 & 3090 have a 6+ week delay from launch (after initial stock ran out) and the waiting list grows 2 weeks for every new week of orders until demand falls and/or supply increases.
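For reference, a minimal Python sketch of the frame-buffer traffic arithmetic in the comment above, under its own assumptions of a 128 MB buffer refreshed at 60 Hz:

```python
# Minimal sketch of the frame-buffer traffic arithmetic above: a 128 MB buffer
# written once per frame at 60 Hz, then read back out once for scan-out.
frame_buffer_mb = 128   # buffer size assumed in the comment
fps = 60                # output rate assumed in the comment

write_gbs = frame_buffer_mb * fps / 1000   # GB/s of writes
read_gbs = write_gbs                       # read back once for display output
total_gbs = write_gbs + read_gbs

print(f"writes {write_gbs:.2f} GB/s + reads {read_gbs:.2f} GB/s = {total_gbs:.2f} GB/s")
# -> ~15 GB/s of traffic that could stay on-die if the frame buffer lives
#    entirely in the 128 MB Infinity Cache; any benefit beyond that has to come
#    from caching other hot render targets and data, as the comment suggests.
```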
Yamazakikun - Thursday, October 29, 2020 - link
What do we know about how well these cards will work with Photoshop, Lightroom, etc.?
GeoffreyA - Thursday, October 29, 2020 - link
Brilliant work, AMD. On a side note, I wonder whether RDNA2 will ever find its way into the last of the AM4 APUs. Zen 3 + RDNA2 will be pretty nice for people coming from Raven Ridge or Picasso.
Spunjji - Thursday, October 29, 2020 - link
Not for AM4 - the first RDNA 2 APU will be Van Gogh. The desktop APU with RDNA 2 won't be due until AM5 arrives, and it'll benefit from the additional memory bandwidth.
GeoffreyA - Thursday, October 29, 2020 - link
Was hoping, against hope, these GPUs would make it onto AM4. Oh, well, as long as there's a Zen 3 APU, carrying either Vega or RDNA, I'll be quite happy. I'm on a 2200G, but it runs well, and I've got no complaints. Just wish encoding were a bit faster at times. And as for the Vega 8, happy with it too, because I don't really play games any more, except for some older titles I missed and favourite classics. I am looking forward to the mythical HL3 though :)
Spunjji - Friday, October 30, 2020 - link
Cezanne's what you'd be looking for! Zen 3 + Vega. Hopefully this time it'll get a release on desktop. No guarantees that it would work in an existing motherboard, though.
GeoffreyA - Friday, October 30, 2020 - link
Thanks, I'll keep an eye on Cezanne and perhaps get a 6C/8T one in a few years' time. Looking forward to that Zen-Zen 3 IPC boost and higher clocks. Motherboard is a B450 Tomahawk. Hopefully it'll get support.
GeoffreyA - Friday, October 30, 2020 - link
* 12T
Spunjji - Monday, November 2, 2020 - link
Desktop Renoir is getting support on B450, and so is Zen 3, so *in theory* Cezanne support should happen. Good luck! :)
GeoffreyA - Tuesday, November 3, 2020 - link
Thanks
jun23 - Thursday, October 29, 2020 - link
It is such a coincidence that the CEOs of both Nvidia and AMD are from the same city in Taiwan and are almost the same age!
webdoctors - Thursday, October 29, 2020 - link
?? What are you talking about, they're not in the same age bracket; one is the uncle of the other. Your math is terrible.
Thanny - Friday, October 30, 2020 - link
Their ages are 7 years apart, and they are not related.
Cooe - Thursday, October 29, 2020 - link
"Rage Mode" isn't really an "auto-overclocking" feature at all. AMD explained to a bunch of major tech press (like GamersNexus) that is simply slightly bumps up the power limit & fan curve, but does NOT touch the actual clock-speed setting at all. This is good according to AMD for a +1-2% performance bump on average. So basically, the RX 6900 XT, really is just THAT fast. Even without Rage Mode, it'll still be neck & neck.Vitor - Thursday, October 29, 2020 - link
I wonder if AMD could make a super 6900 XT next year, with HBM2 memory and on 5nm. Raise the price to 1300 and it will still be cheaper than the 3090.
scineram - Monday, November 2, 2020 - link
No.Santoval - Sunday, November 1, 2020 - link
How can the 6900 XT be roughly as fast as (or slightly faster than) the RTX 3090 in games when the latter card has an astounding 15 TFLOPs of higher FP32 performance? Are the Infinity Cache, Rage Mode and Smart Access Memory really enough to compensate for the much lower (nominally, anyway) FP32 performance?
Or do these games also support FP16 precision (at least partly, for shading some fast action scenes where higher FPS is a higher priority than quality), where I assume Navi 2 still has twice the performance while consumer Ampere offers ZERO extra performance (since Jensen Huang would hate to not sell even a single AI training or inferencing customer their over-priced professional cards...)?
Spunjji - Monday, November 2, 2020 - link
The FP32 performance quoted for the RTX 3090 is a theoretical peak that can *never, ever* be reached in actual gaming scenarios, because it's only attainable when there are no INT32 operations occurring in the shaders.
In other words: RDNA 2 doesn't have to match Ampere's quoted FP32 to match its performance in games; it only needs to generate around 2/3 of the TFLOPS to reach a similar level.
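To illustrate the point, here is a rough, hedged Python sketch of how an assumed INT32/FP32 instruction mix eats into Ampere's quoted peak. The 36-per-100 mix is an illustrative assumption (a figure NVIDIA has cited for typical game shaders in the Turing era), not something taken from the comments above.

```python
# Hedged, back-of-the-envelope model of why Ampere's quoted FP32 peak overstates
# gaming throughput. On GA10x, one of the two FP32 datapaths per SM partition is
# shared with INT32, and the quoted peak assumes every issue slot is doing FP32.
# The instruction mix below is an assumption for illustration; NVIDIA has cited
# roughly ~36 INT32 operations per 100 FP32 operations for typical game shaders.
peak_fp32_tflops = 35.6   # RTX 3090 theoretical FP32 peak
int_per_100_fp = 36       # assumed INT32:FP32 instruction mix

fp_fraction = 100 / (100 + int_per_100_fp)       # share of issue slots left for FP32
effective_fp32 = peak_fp32_tflops * fp_fraction

print(f"Effective FP32 under this mix: ~{effective_fp32:.1f} TFLOPS "
      f"({fp_fraction:.0%} of the quoted peak)")
# -> ~26 TFLOPS here, before any scheduling, occupancy or bandwidth losses,
#    which is in the same ballpark as the '~2/3 of the TFLOPS' figure above and
#    helps explain how a ~23 TFLOPS RDNA 2 card can trade blows with it.
```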
Santoval - Monday, November 2, 2020 - link
Are INT32 operations that common shaders? I thought Nvidia added INT32 shaders only recently (in Turing I think). If they are that common why did Nvidia remove them from Ampere?
Santoval - Monday, November 2, 2020 - link
edit : "common *in* shaders"..mdriftmeyer - Monday, November 2, 2020 - link
Yes.
Samus - Thursday, November 5, 2020 - link
Woof.