I don't know, AMD is more likely to give consumers a fair price or discount. And with Nvidia's cards already launched, they know what they're competing against, both price-wise and performance-wise.
I feel like the RX-7000 is going to be great, but it's not going to have as much uplift as the RTX-4000 series.
But we don't know everything, especially with the cryptomining mostly behind us. Who wins is a different answer depending on your perspective.
He meant both of the AMD GPUs, not Nvidia's. Lol. No, Nvidia's prices are insane and laughable compared to this. I bet this will destroy the 4080, and for $200 less at that. Nvidia should quickly adjust pricing on all their cards, and fix their broken power connectors that are just a train wreck. Even the cables burn, with no adapter used.
Nvidia doesn't have to reduce prices, because their GPUs sell well enough even though they are much more expensive! I am sure Nvidia will release a 4080 Ti that is about the same speed as the 7900XTX and costs $200-$300 more (Nvidia tax), and reduce the 4080's price to match the 7900XTX, and the 4080 will still outsell the 7900XTX even though it is slower, because it is Nvidia... "Buy the real Nvidia at the same price as the imposter!" "The more you buy, the more you save!" Nvidia marketing will win this time too, just look at the sales numbers.
But it is good for the rest of us that AMD made reasonably priced GPUs, so we don't have to buy Nvidia GPUs.
That’s your opinion and nothing more. The 4080 is certainly too expensive if it’s slower than the 7900XTX and XT, so no, I don’t agree. Many people will share my opinion. Only the 4090 as a halo product can be overpriced; the rest cannot.
The thing is the numbers play out this way. Whether you agree or disagree is irrelevant; nVidia sell more cards even when they don't make as much sense.
This time it does seem like a more dramatic value advantage than last generation's, so clearly AMD are hoping to really start their "Zen moment" for RDNA, and I think they will definitely sway more of the market than usual. But at the end of the day, nVidia have a lot of (in my opinion undeserved) mindshare. That's hard to compete with.
AMD is getting their Zen moment for RDNA3 right now. The performance is there, the drivers are mature, and the chiplet design is significantly cheaper to produce than what nvidia is offering. There is an added benefit of AMD being power efficient, which is looking like it will be a factor in some areas.
nvidia likely cannot lower prices all that much if they keep using monolithic dies on the latest and greatest process node from TSMC. nvidia won't have a chiplet design for at least 2 more generations of cards, and this is likely going to tank them.
The enterprise market hasn't disappeared, and currently they dominate there. Plus, all the things you mentioned don't matter as much in those spaces. So no, I don't think it will "tank them" any more than AMD's mistakes back in the day "tanked them".
AMD's Zen moment already came with RX 6000, as it brought parity with Nvidia. To be precise, it was like their Zen 1 moment. A Zen 3 moment still hasn't happened, as AMD is still not faster. RX 6000/7000 are at best comparable to Zen 1; Zen 2 was already better than Intel in many regards, as it had way more cores, just not in pure gaming, though fast enough. That is not happening here. If anything, Radeon is like Zen 2 gaming-wise, while Instinct is very competitive with Nvidia or faster, so all in all it's not comparable to the CPU space. For me a true Zen moment would be if they could surpass Nvidia, which didn't really happen unless you only care about 1080p/1440p gaming.
So, on the 7900XT_'s GCD-to-6x16 MB SRAM latency issue, does AMD add HBM lanes to the commercial memory controller option, supporting dual [x2 GCD] on an interposer? mb
…Chiplet design is an implementation detail that doesn't matter to end users. If Nvidia's performance is still outstanding, with GPUs in a class of their own, Nvidia will continue to have a healthy business prioritizing enterprise, professionals, and prosumers, remaining a premium GPU company that provides premium GPUs across every segment of its business.
Those audiences are a sure thing during a recession compared to price-conscious consumers. For that reason, it is no surprise Nvidia sold out of the 4090s they were willing to make, at a price and ROI they were happy with.
Price-conscious everyday consumers, especially around Black Friday, are not a reliable audience to ship things to at the end of the year if you want a predictable return. Accordingly, AMD and Nvidia prefer to open a GPU generation with high-end cards aimed specifically at those other audiences, where the return is predictable. Hasty average consumers buying such cards well beyond their usual purchasing power are merely a nice bonus.
AD at 67 M units which is the minimum NV volume over 20 months where the first unit is priced at cost decreases 1/67 millionth every unit until the last unit produced is sold price at cost. mb
As long as the sheep are there, mindless and buying everything, yes. But sheep aren't shopping for $1200+ cards, so you are wrong. The 4080 might not be such a huge seller; it's overpriced as hell: $500 extra over last gen, and a smaller chip, not the biggest anymore.
The 4080 doesn't have to be a big seller! Nvidia can sell the 4070 and 4060 to those who only have $700-900 for a GPU, and they can sell a 4050 for $500 to those for whom $700 is too much... and if $500 is too much, they can always sell a 1060 or 1030 for $300... They don't care which model the customer buys, as long as it is an Nvidia GPU. They make a big profit on each of them, and as has been said, people buy those even at high prices, more than they buy AMD GPUs. It is just a fact that you can see by looking at any statistics...
We will see what happens; you don't have a crystal ball. AMD priced the 7000 series low, with higher performance than the RTX 4080, to disrupt Nvidia. And many people have already praised that decision. You're not that well informed, bro.
Not really my problem if you're too low-IQ to understand my point. My point was never about money; my point was that people with more money usually also have more brains and will easily see that the RX 7900 has more performance, and will often rather buy that. But we will see, no use arguing with fanboys.
Allocations are based on presumed demand. Nvidia tried to drop some of their TSMC allocations because they knew demand wouldn't be as high for the 4000 series. The goal appeared to be to introduce some artificial scarcity in place of the crypto scarcity. This would allow them to price the cards high and get some really good margins. TSMC said no, Nvidia still priced their cards high, and now we are in a situation where they very well may not sell out.
Nvidia did move gaming GPU production over to enterprise GPU production, so they did actually reduce the volume of their gaming GPUs. TSMC does not care what Nvidia produces as long as the allocation stays within the deal.
Actually you’re wrong. You’re just extremely ignorant and living on your own small moon in a lonely part of the universe. Everyone and their dog says that the 4080 is extremely overpriced. Wake up.
Maybe everyone and their dog around YOU think that way. Nvidia and AMD operate in a global market. Neither card is expensive for the demands of North America, based on what people, not just cryptominers, were willing to pay.
Nvidia stock is what they're willing to make; if they sell out every GPU they're willing to make at their current prices, they're happy unless their stakeholders pressure them to sell more than that.
" Maybe everyone and their dog around YOU think that way. "
Nope, same around here where I am as well. The 4090 starts at $2220 and tops out at $2780; the 4080s are more than likely going to be at best $500 less than that here at the entry level, and will probably overlap the entry-level 4090s. Overall, the RTX 40 series is overpriced across the board so far here. Add to that the melting power connector, and interest in an RTX 40 series card has dropped quite a lot; most are now waiting till December to see what the Radeon 7900 series brings.
The 4080 16GB at $1199 is priced appropriately for Q1 risk production, before the volume ramp's marginal-cost decline to peak production volume, at which time the card can be sold for $945 into run-down. The reason the 4080 16GB seems overpriced at Q1 risk-production volume is that the 4090 FE at $1599 was at that time $300 underwater, and anyone who got one for $1599 took Nvidia for $300. AIB dGPU price through the broker market is the only real price, segment by segment: the supply-and-demand, application-competitive perfect price, on ROI, for the application workload. mb
This makes no sense. They control the market and the pricing. Outside of pressure from competitors, we can only make suggestions to them by voting with our wallets.
All I know is that AMD has gone from their dark days (bankruptcy) to their (highly valuable) bright future. It's all because they offer products that are valuable.
Remember the era of Bulldozer vs Intel Core i: from Gen 1 to Gen 6 it was a massacre. It felt like they completely relied on their "ATi" division at the time, through GCN 1-3. Then they were left behind from the early Nvidia 900 series to the late Nvidia 1600 series, during which time they had their "Zen" moment. It felt like RDNA was the same Zen moment for their GPU division, and this is their Zen 3 moment. History rhymes.
It is ironic how everyone freaked out when AMD bought ATi, only for ATi to be the entity that kept the lights on. There was nearly a solid decade of undesirable AMD CPUs, essentially from the Phenom/Athlon X4 up to the launch of the first Zen parts... Intel just killed them after the Netburst era.
All that said, I think this incredibly innovative card, regardless of its performance, takes a hard line against nVidia's statement that 'GPUs will continue to cost more every generation' by REDUCING launch prices gen-over-gen. That's a very important PR move for AMD, and historically this works very favorably in the gaming industry, a tactic Sony used to trump Sega and Microsoft in overall sales.
I still have most of mine, and for those into virtualization and ECC, AMD was less of a headache than Intel and its "market differentiation", aka milking the consumer.
I had the Phenom II X6 1090T, which happily sat at 4 GHz, but it wasn't until you overclocked the NB and HT link and pushed the RAM up that these chips started to breathe.
Eventually I went with the Core i7 3930K as I wanted more, but that chip is still humming along fine in my grandmother's machine.
‘ Once bulldozer arrived, the FX 8120 was a decent part that overclocked extremely well. Sure, they couldn't beat Intel, but they were good parts that lasted.’
I don’t know what market AMD was targeting with Bulldozer. Piledriver used the same number of transistors as Sandy Bridge E and for what?
Everything about the design was substandard. AMD even broke AVX with the Piledriver update.
AMD not only couldn’t beat Intel in performance, it couldn’t come close in efficiency.
It seems the only market the BD parts worked well enough in was the GPU-centric supercomputer, where the inefficiencies of the BD design could be better masked.
Bulldozer was disastrous. But it was, and is, one of the most innovative, original CPUs made. AMD, stunned how the Core microarchitecture was thrashing them, tried to come up with something completely different. Unfortunately, that doesn't usually work in this field, the principles of how a good CPU is made having already been laid down. Bulldozer was ambitious though; it aimed to bring them back, but failed to deliver. (At least on the single-threading front.) Yet, through its four iterations, AMD improved the design, raising IPC and dropping power consumption. Seemingly, it taught them a lot, and Zen benefitted. Compare the efficiency of Zen 3 to ADL or Zen 4 to RPL.
Also, they were stuck with it, CPUs taking time to be designed; and according to Keller, there was a great deal of rubbish going on behind the scenes, as well as a despondence that they couldn't aim higher.
"AMD not only couldn’t beat Intel in performance, it couldn’t come close in efficiency."
That's true. But what happened to those efficient CPUs today? The tables have been turned, and paradoxically, Bulldozer partly created Zen.
The problem AMD had with Bulldozer was a lack of market position to get people to adopt its design in their software. It couldn't hold a candle to comparably priced Intel chips at the time, even older Core 2 Quads, which that software was optimized for.
The sad irony is AMD's greatest gift to x86, 64-bit memory allocation, was so ahead of its time that they were way behind when it went mainstream with Windows 7, the first OS to have initial adoption at 64-bit over 32-bit.
Construction-era A9 Acer client here: great graphics, big-screen client, but no faster (slightly slower, actually) than my 2007 Core 2 Duo mobile, the giveaway sludge Intel bundled to HP in a sales package because it was such a computation dog. This one isn't much better (yes, typing on it right here; avoid large Excel databases), but it's OK as a client device, and it has its own storage. mb
It's incredible that people credit ATi with keeping the lights on at AMD, forgetting that the multi-billion-dollar boat anchor that was ATi was the reason AMD struggled to release new CPUs for the next decade...
It takes years to design a CPU. Once AMD started down the CMT path they were stuck there for quite a while. For a long time it was the GPU business that kept AMD afloat.
Any debt AMD took on to buy ATI was AMD's fault, not ATI's. AMD made mistakes with their CPU architecture and their Fusion project. I don't know what input the ATI people had that swayed or didn't sway AMD one way or another on Fusion, but I think creating Fusion was AMD's whole reason for buying ATI, so AMD is primarily responsible for it and for the compromises to the GPU architecture that resulted from it. But I think the biggest contributor to AMD's misfortunes was their failure with their fabs. Even after they sold them and got cash for them, they were still pinned down by the wafer supply agreements they signed as part of the deal. It took them years to finally extricate themselves from their fabs failure. But offloading the fabs was probably the most important part of AMD's turnaround. Otherwise they never would have had the money to dig themselves, especially their fabs, out of the hole they found themselves in.
The only boat anchor was GloFo and AMD's CPU business. ATI dominated consoles after the GameCube and has ever since. The integrated hardware division of AMD (specifically GPUs) had a higher gross profit than every other sector for YEARS.
This. The worst part about all of this is that it was completely avoidable. Firstly, the immoral and illegal practices Intel engaged in by having their customers (partners) buy their processors and effectively block AMD in the laptop and desktop space. It hurt AMD's reputation, plunged their revenue, and damaged their R&D capabilities. We're talking billions at a minimum, and in a crucial period. Had that not occurred, perhaps AMD could have either saved GlobalFoundries or spun it off years earlier (becoming a fabless semiconductor designer).
Secondly, it's about Microsoft. AMD's designs also suffered from software bugs and optimisation issues in Windows XP and Windows Vista. Not just day-to-day stuff, but professional applications and games too. The sad part is that Microsoft had acquired most of the workers and business from Sega when they were planning a Windows gaming console. They initially designed the original Xbox with AMD. Yes, they had fully working prototypes and everything; that's even what they showcased on stage. But Bill Gates's ego couldn't be quenched, and he cancelled the deal and signed on with Intel and Nvidia simply because they had better "street cred". Had the original Xbox gone mainstream with the AMD hardware, it would've been slightly weaker but much, much cheaper for both consumers and Microsoft. It would have slingshot AMD's revenue back into the black, their reputation would have improved, and software for AMD would have been much better optimised.
In fact, Microsoft couldn't build the Xbox 360 afterwards as planned, because both Nvidia and Intel had raised their prices severalfold. AMD was broke at the time and could not bid on the tender, so Microsoft was left to abandon that version of the Xbox 360 project. They really wanted to stick to their x86 design and push their Direct3D API to all the large development studios. Luckily they were able to make a successor by using a new processor designed by Sony that they were able to "borrow" from IBM due to a license loophole, but it meant building a new, unique operating system and having to resort to emulation for backwards compatibility (instead of native/seamless support).
In a better timeline, Microsoft would've championed the single-core Athlon 64 in their console, and AMD chipsets would have gotten much more optimised. Microsoft would have stopped Intel from blocking AMD chips from being sold to OEMs. With new R&D, AMD's next processor, the Phenom I, would have had more cache and more bandwidth, putting it ahead of the first Intel Core. The Xbox 360 would have adopted this, a dual-core x86 console with seamless backwards compatibility and a large revenue source. Not to mention AMD's support of Linux. AMD could have sold GlobalFoundries and bought ATi at around the same time. They could have shifted their low-priority 90nm-45nm products there (motherboards, low-end CPUs, low-end GPUs) and ordered their high-priority 28nm-20nm products (high-end CPUs, high-end GPUs) from TSMC instead in the late '00s. They would have been competitive, and we might have even seen Zen 1 come to market way earlier, like 2012-2014 instead of 2017-2018. Five years and billions of dollars is nothing to sneeze at. It would have meant the "Xbox 720" happening sooner as well, all of these positive loops making AMD a better product and company overall. That in turn means a much healthier competitive market, instead of the three giants Microsoft+Intel+Nvidia. We might have seen AMD partnering with Valve, and the rise of the Steam Deck years earlier.
Yeah, Intel was bad under 15 USC 1 and 2, and 18 USC 1962(c) and 18 USC 1956, where MS is explicitly tied to Intel contractually (it takes two or more signatures) and horizontally in combination through 2001 on Intel Inside tied charge-back, tying Wintel to the channel and to media sales preview for kickback, known as tied registered metering. mb
Not quite. Phenom II was pretty decent and I used it for a good while. Bulldozer and later were trash, aside from the APUs, which were fine. So you're wrong on multiple counts.
I must agree with Khanan. Phenom II took the lead for about a month until Intel's next gen took over again. That was a solid series of chips. Sadly, the original Phenom launch was disappointing, much like the Bulldozers. Had they launched Phenom II instead of Phenom I, they would have regained the ground they'd lost after the original Athlons were passed up.
The original Phenom was cache starved on the L3, I believe, and Phenom II doubled or tripled the cache, whichever is correct, throwing open the doors to its performance.
Another issue for Phenom I was the disastrous 65nm process and AMD's insistence on making the quad cores monolithic dies instead of going the Intel route and just slapping two dual cores together.
The presentation started off really slow, but Scott really brought the energy, and the guy after him, what a finish. I was not expecting all those extra features and that price. I am for sure going to upgrade from my 6800XT. I was going to go 7900XT, but the 7900XTX for only $1,000 is a steal, so I may have to move up.
The best moment was when he ridiculed Nvidia for the power connectors, which are just terrible. Nvidia trying to be so Apple-like, using small PCBs and small connectors, led to this disaster.
I was expecting about USD $1300 and roughly -10% performance against the RTX 4090.
With this USD $1000 launch, my expectations are a bit more muted. I'm expecting the XTX to trade blows with the RTX 4080, but overall be the better card, minus the exclusive features of RT and DLSS 3.
You base your expectations solely on price? So you're one of those guys who think price is everything. Hahahaha... I can't tell you how many times I've bought excellent products that cost way less than others; you couldn't be more wrong. The 7900XTX is competitive with the 4090. For $600 less.
Where do you guys come from? I don't understand how the tech review site with the most doctorates on staff has, by far, the dumbest comment section... competitive with the 4090, how? Looking at basically the one and only game that is indicative of where graphics are going (Cyberpunk), the 7900 needs FSR to equal what the 4090 does at native resolution... and once DLSS comes into the picture, it's in an entirely different, lesser realm with absolutely no hope of making up the enormous gap.
It is clearly a large step behind, which is why it costs far less. AMD priced it precisely where they needed to. There are no bargains in graphics anymore - you get exactly what you pay for.
Where I'm coming from? I'm clearly higher-IQ and more informed than you are. There are already multiple reviewers who have stated that the 7900XTX may be only 10% behind the 4090. For that they used information from AMD's official presentation and compared it to their own 4090 data. Just because the 7900XTX isn't costing you an arm and a leg and has a normal price doesn't mean its performance isn't competitive with the competition. It's ironic how you attack me on an intellectual level while just being some average emotional dude who isn't even able to be unbiased and informed. I saw at least 5 reviewers who said that the 7900XTX is probably competitive with the 4090.
I want to add: you talk complete nonsense; there are no comparisons where AMD uses FSR and Nvidia runs native. All the comparisons I'm talking about were made on the premise of no upscaling being used. Other than that, we will see. End of story.
"...PROBABLY competitive..." That's the nut. Let's wait for half a dozen sites to do comparisons once hardware is on the street and perhaps have a better informed discussion? I don't implicitly trust ANY company's press releases to make informed decisions. I am HUGELY optimistic for this gen, having owned AMD and Intel CPUs, and AMD and NVIDIA GPUs. Let's see how these play out before we claim who is more informed than whom, since neither you nor I have seen one of AMD's cards independently tested. Thank you sir.
I think that the comments are dependent on a person's perspective. They are not dumb. The rtx 4090 is untouchable to you because it satisfies the qualities you need in a GPU, while others have qualities they are looking for in a GPU which make the 7900XTX superior to the RTX4090. For example someone who wants to try out the displayport 2.1 is the same as the one who wants to try out Ray traced high refresh gaming. They each will choose a different GPU. And whichever way you slice it, the GPU they choose would be superior to the other option based on their preferences.
Yes, and even RT: as long as the 7900XTX has enough to use RT with FSR activated in 4K, like 60 fps, maybe a bit more, it is enough. RT is a single-player game feature and you don't really need high fps in SP games; 60 fps is easily enough and easily fluid enough. By that I mean 60 fps on a decent 144 Hz display with backlight strobing, not some random monitor, of course.
So, you buy a $1000 GPU to run this generation's games at 60 fps in 4K with RT. Fair enough. But surely someone as smart as you will understand that coming games will be more demanding, so a GPU that can JUST hit this target now will fall below it in a year or so?
As long as there is no 80 Ti on the 102 die, Nvidia and AIBs can sell every 4090 they make to government labs and institutional customers at broker-market price, and some of those customers don't even want the card, just the GPU, to implement in their own designs. The 103 is an oddball at 397 mm2, and the 104 at 295 mm2 is the mass-market product. mb
Link to reviews to support such a claim, assuming you mean "competitive" in terms of performance in games, Compute, and Ray Tracing (all of which are part of the 'package')?
Interpolation isn't really anything to brag about; I hope AMD isn't jumping on that shit-train. This is starting to remind me of the old days, with worse and worse variants of AA introduced every new generation, and drivers detecting game.exe and 3dmark.exe to game performance. I wish Nvidia at least once could focus on improving visual fidelity instead of inventing new ways to sacrifice fidelity for speed.
Upscaling is mainly important for using RT, though there are some games that will have enough fps anyway, if you can manage to play at 60 fps with no upscaler used. I don't see the issue; old-schoolers whining about upscalers need to grow up. As long as the quality is decent, it doesn't matter whether it's native or upscaled.
According to analysis at tftcentral, 8K is useless on the desktop for gamers. If this card is targeting 8K that implies, but doesn’t guarantee, good 4K performance. That implies less need to switch on image quality reduction features to obtain performance.
Of course, games that use RT on invisible grains of sand under buildings and tesselate meticulously undulating water within solid walls will be too demanding for parts not designed to remove those anchors.
"I wish Nvidia at least once could focus on improving visual fidelity instead of inventing new ways to sacrifice fidelity for speed." interesting point so noted. mb
This was an interesting and fairly competitive launch. $899 and $999 at the top end bode well for well-priced 7800 XT and 7700 XT type products.
It's interesting to see that AMD has also made the switch to a "double FP32" architecture the same way NVIDIA did with Ampere. On the other hand, the ray tracing claims are fairly lackluster, and you have to wonder whether the performance increase will be a full 50% at 1440p. They claim 50% better ray tracing, but that just keeps up with the raster gains. RDNA 3 needed to be about twice as fast in RT to begin competing with Lovelace.
Overall, it seems AMD will play the same role in the market as they have this last generation. They are not the outright fastest, FSR 3 will lag behind DLSS 3, and ray tracing is a sore spot, but they are priced well and have better efficiency. I'm hoping they can convince me to upgrade from my 3060 Ti to something worthwhile for <$499 next year.
You're doing it wrong. RTRT performance isn't the final frame rate. It's the relative performance of RTRT versus raster only. Take a hypothetical game that runs at 100fps without any RTRT effects. Turn on RTRT and it's at 30fps with RDNA 2. 50% better RTRT performance means it will now run at 45fps on the same speed hardware. Increase the original raster performance to 150fps, and the RTRT performance is now 67.5fps instead of the 45fps that it would have been without the uplift.
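To put that in code form, here is a minimal sketch of the relative-uplift arithmetic (the frame rates are the hypothetical ones from the example above, not measured numbers):

```python
# Hypothetical frame rates from the example above, not benchmark results.
raster_fps = 100.0        # RDNA 2 class card, RT off
rt_fps = 30.0             # same card, RT on
rt_cost = rt_fps / raster_fps          # fraction of raster speed kept with RT on (0.30)

rt_uplift = 1.5           # "50% better RT performance" relative to raster
raster_uplift = 1.5       # "50% better raster performance"

print(rt_fps * rt_uplift)                                # 45.0 fps: same raster speed, better RT
print(raster_fps * raster_uplift * rt_cost * rt_uplift)  # 67.5 fps: both uplifts combined
```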
According to AMD's OWN numbers, there is a 47-84% performance uplift in RT. Also according to AMD, the flagship is 70% faster in raster. Meaning RT performance is not improved relative to raster, and may actually be slightly worse. This is in comparison to Nvidia, which had an RT lead and actually improved it gen over gen. @TallestJon96's analysis is correct.
AMD's RT results are with FSR, which means the game is rendered at a lower resolution. But we know that 6900XT performance dropped off at 4K, while 7900XTX performance should be more consistent there (way higher memory bandwidth); thus, the advantage over the 6900XT may be smaller at the lower (FSR) resolution where the 6900XT is not limited. So we do not really know what the new RT performance is yet.
*50-70% more raster, not 70%. Same with RT: 50-80%. So I suspect that whether RT gains a higher percentage than raster, or just matches it, is highly dependent on the game. In any case, it could be that RT performance is just high enough to be usable in all games, if you can live with upscalers and 60-80 fps in 4K. And that is what is important, nothing else.
Regarding the switch to doing two FP32 operations per clock cycle, I am annoyed that in the specifications AMD has published on their site for the 2 announced cards, there is no mention of what the throughput ratio between FP64 and FP32 is in RDNA 3, i.e. what the FP64 throughput of the new GPUs is.
For a decade, most AMD gaming GPUs had an FP64:FP32 throughput ratio of 1:16, which was better than the 1:32 ratio of the NVIDIA GPUs.
Now there have been rumors that in RDNA 3 this ratio has been lowered to 1:32.
The rumors are plausible, because that is what would happen if in RDNA 3 the FP32 execution units were doubled, but not also the FP64 execution units.
If that is what they did, they should have announced it. There are some people, like myself, for whom the FP64:FP32 ratio is important. If they had kept the old ratio, a Radeon 7900XT GPU would have been twice as fast as a Ryzen 9 7950X CPU for a not much higher price.
In that case, I would have bought an AMD GPU. If they have reduced the FP64:FP32 ratio, then a Radeon 7900XT GPU would have only about the same speed as a Ryzen 9 7950X CPU, but at a higher price and higher power consumption, so it would not be a competitive alternative.
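For a rough sense of scale, here is a back-of-the-envelope sketch of that comparison. The inputs are my own assumptions, not AMD or Intel figures: roughly 52 FP32 TFLOPS for the 7900XT as announced, and a Zen 4 FP64 peak estimated from 16 cores, ~5 GHz, and two 256-bit FMA pipes per core:

```python
# Back-of-the-envelope FP64 throughput comparison; every input is an assumption.
gpu_fp32_tflops = 52.0                      # approx. announced FP32 rate, Radeon 7900XT

fp64_if_1_to_16 = gpu_fp32_tflops / 16      # ~3.3 TFLOPS if the old 1:16 ratio were kept
fp64_if_1_to_32 = gpu_fp32_tflops / 32      # ~1.6 TFLOPS if the rumored 1:32 ratio is true

# Rough Zen 4 peak: 16 cores * 5 GHz * 16 FP64 FLOPs/cycle (2 x 256-bit FMA per core)
cpu_fp64_tflops = 16 * 5.0 * 16 / 1000      # ~1.28 TFLOPS

print(fp64_if_1_to_16 / cpu_fp64_tflops)    # ~2.5x the CPU with a 1:16 ratio
print(fp64_if_1_to_32 / cpu_fp64_tflops)    # ~1.3x the CPU with a 1:32 ratio, same ballpark
```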
It's AMD on a shoestring. You get a specification, implemented in hardware, and some basic proofs of concept, none of which can be called an application, compatibility, or performance report subject to whole-platform proofs, because all of those are kept secret by their developers. mb
I'm also wondering why they didn't use the 7950 XT name for anything. There have been zero rumors of anything bigger, but they're also being coy about the name of this die.
AMD has the ability to stack Infinity Cache and use higher-clocked memory. Pump the card to 400-500 watts and there is your 7950XTX. I still think they should have done a 7950XT with an XTX refresh, or resurrected the 7970 branding. Maybe a 7970 3GHz Edition would be cool.
Otritus got it. There have been rumors that said that AMD can do a "1-hi" stacking of Infinity Cache, allowing a doubling to 192 MB. Then there will be considerably faster GDDR6 memory available within months. They could also bin the GCD and raise clocks. So there is easily room for AMD to create a 4090 Ti competitor.
From what's available via leaks, the launch RDNA 3 cards are not as far up the curve of diminishing returns as the 4080/4090, which could have easily been cheaper-to-produce, lower-wattage cards with similar performance... and fit into regular cases.
I am intrigued here as well, because those 4GB of VRAM may be worth $100 to some people. If you value the VRAM at zero, then that 11% price gap is like a 5% performance gap at the high end. The clock speed and bandwidth reduction indicate a larger than 11% drop in performance, so I am quite confused by RDNA3's performance scaling.
The $100 difference is a marketing trick that works particularly well when presented with exactly two choices. They know if you're spending $900 on a graphics card, you have an extra hundred bucks laying around.
It probably reflects the yield: if the yield is very good, they want to sell as many XTXs as possible and not have to use fully working dies to build the XT.
What would be fun is if they released a dual card with two GCDs; it seems they have the tech to do it package-wise, the issue would be the power requirements. They might do that first as an Instinct card, but it's clearly where they are going.
There will not be a 2-GCD card this gen, as it seems they haven't solved the issue of GCD-to-GCD communication yet. The issue is that it needs vast amounts of bandwidth and good latency to communicate, otherwise it will not behave like a single GPU, which is what you want. Maybe next gen, or the gen after that, then.
That's not a gaming card, and those dual-GPU boards are handled as two GPUs by the system, so not what you want for a gaming GPU, as CrossFire and SLI are dead and not good enough. Instinct isn't Radeon.
Both NVIDIA and AMD have set prices for their second best card that provide a much lower performance per dollar than their top card.
I assume that this is a win-win from their point of view. The buyers who want the best efficiency for the money they spend will be lured to buy the most expensive card, while those who cannot afford the greatest price will be forced to buy the cheaper card, which nonetheless is more profitable for the vendor.
The Radeon 7900XT has about the same FP32 TFlop/s value as RTX 4080, but about the same $ per Tflop/s value as the less overpriced RTX 4090.
On the other hand the top model Radeon 7900XTX, has about 77% of the FP32 throughput of RTX4090, but only 62% of the price, so its performance per dollar is about 25% higher than that of RTX 4090.
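The arithmetic behind that last figure, using only the announced MSRPs and the FP32 ratio quoted above:

```python
# Relative-value check using the numbers quoted in the comment above.
price_7900xtx, price_4090 = 999.0, 1599.0
relative_throughput = 0.77                      # 7900XTX FP32 rate as a fraction of the 4090's

relative_price = price_7900xtx / price_4090     # ~0.62
print(relative_throughput / relative_price)     # ~1.23, i.e. roughly 25% more FP32 per dollar
```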
Obviously, AMD had to offer much better performance per dollar, because otherwise their cards would not have been competitive with those of NVIDIA, which have much better software support.
AMD really is going super conservative here. The entire GCD is just 300mm2; AD102 is 600mm2. Yes, there's the chiplet cache, but still, and all that at 350W. They should have made bigger dies, and even the clocks are super conservative again vs Nvidia; maybe they just want to keep the card's BOM and overall packaging smaller. The reference card is barely 3 slots, lol, it's 2.5. Nice move.
Coming to the ILP: it's also good to finally see AT's coverage, it helps a lot in understanding it. So yeah, ILP is the new thing, and it screams just like Nvidia's new FP32 throughput calculations: extremely bloated and not comparable to other architectures or to their own prior designs. The most interesting part is why AMD added AI units without using them for anything related to their GPU performance. I think AMD might pull a Xilinx move soon; I really hope they do.
Now moving on to performance and price: it looks like the 1.7x slide for the 7900XTX vs the 6950XT is at native res, no FSR or other gimmicks, which translates to a good hefty jump. Plus, the 6900XT is as low as $650 now. Super value cards. Also, that performance means AMD will match the 4090, or lose by maybe 10-20% depending on the title, at 60% lower cost and with no BS 12-pin nightmare.
RT is low; it barely matches and beats a 3080. So RT is still Nvidia's ground, not a surprise. But the fact that AMD is able to match Ada at significantly lower power is superb. They should have gone with GDDR6X for even more bandwidth to get more performance out. Also, on RT: it's garbage tech, it introduces a huge GPU penalty, and the performance drop forces you to use upscaling BS. Pure rasterization is way better; just look at Crysis 3 with its superior no-TAA solution and ultimate tessellation with insane graphics fidelity. It's been 10 years, and that game has SLI support too: 3090 SLI = 2x FPS at 4K+. Shame it's all gone for the new garbage upscaling, TAA, and frame interpolation (more below).
The added feature set, like AV1, is a super welcome move; perhaps it will also push the P2P trackers to finally update their encodes from H.264 to AV1. I hope that happens, it would be super nice, but nowadays encodes are rare, the scene is already at an all-time low with not many trackers, and only anime may improve, though that also has few fansubbers and the top encoders are thinning out as well. Anyway, it's a good package.
Now, the elephant in the room is pricing: $1000 7900XTX vs $1600 4090, RDNA3 rips into Nvidia without a doubt. Not a surprise that Nvidia axed the 4080 12GB rip-off edition, lol; it got eaten by AMD, and it gets beaten by a 3090 too, in VRAM buffer and in rasterization performance. What a joke.
Finally, what is that AMD Fluid Motion technology? No word on that from AT, probably NDA or something. Whatever it is, I hope AMD is not going to add fake frames to games like Nvidia's BS DLSS 3, which pumps up latency along with insane graphics fidelity downgrades; Nvidia's tech adds DLSS issues plus frame-insertion artifacts plus latency, a total loss. Really hoping AMD doesn't do it. I wonder if they do it and give it to Ampere cards, a totally anticlimactic move since NV cards already have Reflex (AMD has a low-latency mode too). Also, AMD's new HYPR-RX is more low-latency tech.
Completely agree. The only caveat is that this is a new style of computing (dual-GPU/chiplets), sort of, so it is likely we will see some software issues. Now, whether these are tiny bugs that aren't important, or bigger, annoying ones, is anyone's guess.
But in time, they will get better at building these and the software will mature. Then they can go all-in by using better nodes, higher clocks, more cache, larger bus, faster memory, etc etc. Then Nvidia will be in trouble, without having some sort of technological advantage.
It's really similar to how things played out in the CPU space, with Zen 1 getting back into the game, Zen 2 (imho) having the lead, and then Zen 3 kind of winning (against Intel 10th and 11th gen) and losing (against 12th gen). The RDNA 1 cards like the 5700XT got back into the race. The RDNA 2 cards like the RX 6800 were the clear choice (with RT not being important yet). And now RDNA 3 likely wins against the current and previous options, but potentially loses against the new options (RTX 4090 Ti, 4090, 4080 Ti, 4080).
I bring this up, because it just exposed Nvidia. They UN-launched the "RTX 4080" because how embarrassing it was. The RX 7900XT at the same price completely trashes it. Nvidia thought they could use a 4060Ti-4070Ti card, and ridiculously bump up the price (and name). They did this in the past and got away with it, but this time there's competition. I guess their spies in AMD didn't give them accurate information ; )
All Nvidia cards are overpriced, especially the 4080 with its $500 price hike compared to its predecessor, which also had the big chip and not just the semi-big chip. Nvidia has devolved into a terrible company these days, just fixated on money. The new old Intel.
Right. It's a matter of scale. "Fixated on money" taken to extremes could give us...the current NVIDIA situation. A little more thought, and a little more money could have reduced the chances of bendgate.
24-27 Gbps GDDR6 will be available to AMD if they want to make a 7950 XT(X) next year. They don't need to use GDDR6X and they can probably produce a decent response to a 4090 Ti. I assume that doubled Infinity Cache on a 7950 XT could lessen the need for more memory bandwidth anyway.
What is the encoding efficiency of AV1 compared to relatively widespread H.265 these days? I remember torrenting H.265 videos years ago, even if H.264 is still the popular choice. I just hope that AV2, which is being worked on, will be able to keep H.266 from gaining traction, with faster adoption from being built on AV1. Remember that Google's plan was to update to VP10 and future versions every 18 months.
I didn't expect AMD to increase VRAM to 20-24 GB before the rumors popped up recently. A VRAM war between AMD and Nvidia is nice because it will put better hardware for AI image models like Stable Diffusion in the hands of consumers. Maybe one of them will go to 32 GB within the next 1-2 generations.
AV1 is in H.266/VVC's league, but lower in efficiency and not as sharp. At any rate, it's excellent and superior to H.265. The problem with AV1 was its snail-like encoding speed, but libaom has sped up over time and Intel's SVT-AV1, which had speed but deplorable quality, is usable nowadays and comparable in quality to libaom.
As for H.266/VVC, it's currently the king but largely inaccessible. Fraunhofer's encoder is available but difficult to use, and x266 is nowhere to be seen.
I'd just like to add on to my original comment, owing to a bit of VVC testing I did this week after a long time. The gap between AV1 and VVC is actually smaller than I had thought. Indeed, at least for the low-motion, 1080p clip I tried, one can't really tell the difference, glancing roughly. Only on closer inspection is it evident that VVC is better, but only slightly. This is with Fraunhofer vvcenc 1.5/1.6.1 medium preset; slower does improve quality but speed is terrible. Seems open-source has caught up with the patent crowd. As for x265, on very low bitrates, say < 200 kbps, it collapses, whereas AV1 and VVC hold up remarkably well.
In short, run far away from x265, and go for AV1 if compatibility permits, specifically libaom. H.266/VVC is only millimetres ahead of AV1.
I'm not sure you will get a memory war. AMD has almost always been more generous with memory; only when they were capped by their own hardware did they lag behind (the 4GB cap for the Pacific Islands chips).
I just don't think it's going to stop at 24 GB for consumer cards. And the addition of a 20 GB model could force more Nvidia cards to go above 16 GB. This will have interesting effects in the long run.
"The added feature set like AV1 is a super welcome move"
True. But in practice, what sort of quality will these hardware AV1 encoders yield? Usually, not very good. One might achieve superior compression/quality using software encoding, say, with x265 or SVT-AV1. So, really, in my books, this is more marketing than benefit, though decoding is welcome.
Nowadays, there are an increasing number of encodes in proper AV1 (aomenc, etc.) and their quality is fantastic. Anime, too, works wonderfully because softness is more acceptable there. I would say the problem with such encodes is that they might not play on many people's TVs, AV1 support being rare, and the same goes for Opus. And on some low-end devices, 1080p AV1 really struggles.
Interesting comments on encoders. I only knew that AV1 is painfully slow; other than that, not much. Also, I was just assuming it would be good, maybe I should have added a few points... One more thing: Google and the streaming corporations want to avoid royalties, that's why they want to push their OSS codecs, but the entertainment industry is knee-deep in these patents. So I do not think 4K BDs will ever use AV1; they'll rather stick to HEVC and its successor only.
Also, I think it's aimed more at the streamer audience that AMD is targeting, like NVENC. In the end we just encode using CPUs only, an important point which I failed to mention.
Yes, I agree, AI will take over this domain and is already doing so. This past week, I did come across Meta's new "encodec," which beats even Opus, and whose design appears very different. I look forward to the boffins at Hydrogenaudio coming up with a listening test on this one.
With respect to our current video codecs, it certainly seems a struggle each time they are improved. No doubt, AI will open new avenues for increasing compression. Perhaps a lot is to be gained by some sort of global compression, whereas today's encoders work on a small locality.
Solid points, and you're right, especially that the BDs, and future film discs, will never use open-source codecs, and that these hardware encoders are aimed more at the streaming folk.
Lamentably, "fake frames" appears to be the way the industry is going these days. We'll just have to wait for some renaissance in 3D that resets the current nonsense in games and takes fidelity to the next level. Some new paradigm is needed.
Is this the first time we're seeing defective chiplets used as placeholders instead of going in the trash? Seems like a cost-effective way to balance out a package.
No. In the CPU world, AMD will ship CPUs with defective CCDs. In the GPU world, NVIDIA has been doing it for ages with defective HBM stacks (technically this is logic silicon as opposed to DRAM, but the idea is the same).
Tom's Hardware: AMD Recycles Dual-Chiplet Ryzen 7000's as Ryzen 5 7600X CPUs, Again
"inquisitive users found both Ryzen 7 5800X and Ryzen 5 5600X CPUs were rocking two CCDs, but just one was being used and was necessary.
The approach isn't a secret, or new. AMD was fully transparent about this aspect of its production strategy when Tom's Hardware's deputy managing editor asked about it last year."
Most manufacturers making interposers sell some products with defective chiplets. While you can test the chiplet by itself to ensure functionality, there is a non-negligible failure rate in bonding the chiplets to the interposer. That means you get some interposers with bad connections to chiplets. Then you have to either discard the entire interposer or sell products with bad chiplets.
So that's R20k for the 7900XTX and R18k for the 7900XT.
Geez. That's 3080 money (in South Africa) for Lord knows how much more performance. Wow.
Also, the dual SIMD might turn this into a compute MONSTER for cryptographic hashing and distributed computing like the BOINC or dnetc research projects. <3
6950 XT was 335 Watts, not 355. So the XTX uses a little bit more power. The 6900 XT indeed was 300 Watts. There could be a lot of room to overclock these.
My bet is doubled Infinity Cache with 3D stacking, faster GDDR6, and higher clocks. They will save complicated multiple-GCD configurations for RDNA 4 in 2024.
No graphics card for me then… Would it have killed them to have PCIe 5.0 with 8 lanes or even 4 lanes since PCIe 3.0 with 16 lanes still performs 98% as well as PCIe 4.0 with 16 lanes?
Presumably you think that there will be another PCIe 4.0 x4 GPU this generation like the 6500 XT. Maybe they don't even release something for that segment and just sell cheaper RDNA 2 GPUs like the RX 6600 to fill the low-end. They can even continue to manufacture them if they want to.
"Finally on the subject of AMD’s GPU uncore, while not explicitly called out in AMD’s presentation, it’s worth noting that AMD has not updated their PCIe controller. So RDNA 3 still maxes out at PCIe 4.0 speeds, with Big Navi 3x offering the usual 16 lanes. This means that even though AMD’s latest Ryzen platform supports PCIe 5.0 for graphics (and other PCIe cards), their video cards won’t be reciprocating in this generation. In fact, this means that no one will have a PCIe 5.0 consumer video card."
On other news sites though, they mentioned that the official specifications mention PCIe 5.0 with 16 lanes. I guess we’ll have to wait a bit for definite answers on this.
TechPowerUp says 4.0. Guru3D says we should assume 4.0 because 5.0 was not mentioned in the presentation. Wikipedia says 5.0 but the article is not even done being written and doesn't source it. Wccftech says 5.0. AMD's website says nothing about PCIe on the press release or the product pages.
But back to your original point. These are 16-lane cards, so no problem there. What you have a problem with is some hypothetical PCIe 4.0 x4 7500 XT, right?
A value-priced GPU like the 6500/7500 will not be put into a high-end motherboard, making the discussion of PCIe 5.0 support a moot point, since mid-range and value motherboards do not support PCIe 5.0.
To clarify, there was a Q&A session immediately following the recording of the presentation. It was there where someone (not me) asked about PCIe support, and we were told PCIe 4.0. So that is the source of those claims.
@The Von Matrices: The main issue is that PCIe 5.0 lanes are limited on consumer motherboards, so every device running at half or a quarter of the maximum speed is wasting half or three quarters of the lanes it's taking up. Benchmarks on other sites show the Nvidia 4090 performing 98% as well on PCIe 3.0 x16 and 92% as well on PCIe 2.0 x16. Assuming the performance penalties translate over to Radeon, if it were PCIe 5.0 x8 then it'd allow us to run it with the equivalent performance of PCIe 4.0 x16, with a 0% performance penalty. Those with PCIe 4.0 boards would incur a mere 2% performance penalty with a hypothetical PCIe 5.0 x8 card. But because the card is actually PCIe 4.0, running it at x8 would incur a 2% penalty. And those of us who really need to maximize PCIe lane utilization might have to relegate the graphics card to an x4 slot, incurring an 8% performance penalty.
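For reference, the lane math behind that argument, using nominal per-lane payload rates (real-world throughput is a bit lower, and the decimals are approximations):

```python
# Approximate PCIe payload bandwidth per lane, in GB/s.
per_lane = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

def link_bw(gen: str, lanes: int) -> float:
    """Nominal one-direction bandwidth of a PCIe link."""
    return per_lane[gen] * lanes

print(link_bw("4.0", 16))   # ~31.5 GB/s
print(link_bw("5.0", 8))    # ~31.5 GB/s -> a PCIe 5.0 x8 card would match 4.0 x16
print(link_bw("4.0", 8))    # ~15.8 GB/s -> what a 4.0 card actually gets when run at x8
print(link_bw("5.0", 4))    # ~15.8 GB/s
```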
" Of particular note, both cards will feature, for the first time for an AMD consumer card, a USB-C port for display outputs."
All reference Navi 21 cards had a USB-C port with the same alt mode. It should be quite convenient for future VR headsets, assuming it passes USB3 and USB-PD as well (it should, since that's part of the spec).
Future VR headsets should be standalone, or use WiGig 2 (802.11ay) or better to connect to a nearby desktop. Connecting cables to a VR headset is caveman stuff, with the possible exception of a battery extension that you can secure to your body.
It depends on what you mean by "future VR headsets".
Future VR headsets in 2023 to early 2025? Probably not, as most of them will be wired anyway. WiGig 2 probably eats a lot of power, so it requires large batteries, making the headsets a bit heavy, and realistically most VR experiences are seated anyway. Starting with the PSVR2, which could be the most popular headset of its time, like the PSVR1 was in 2016-2018.
Future VR headsets in 2025-2028? You could be right.
Future VR headsets 8+ years from now? Probably not. At that time we're probably going to be using AR headsets that double as VR on demand.
Regardless, GPU makers develop and market their products for a 2-3 year lifecycle, so having a USB-C output for wired VR headsets makes sense IMO. Just like Sony did on the PS5.
"This is a notable change because AMD developed RDNA (1) in part to get away from a reliance on ILP, which was identified as a weakness of GCN"
What? That is so wrong. When AMD moved to GCN, that's when they stopped trying to extract ILP. GCN does not do ILP in wavefronts at all, each lane in a SIMD is single issue, while it was the prior arch before GCN (VLIW) that was all about extracting ILP.
Yes, the issue with GCN was that it had 64-wide wavefronts; RDNA solved this by using 32-wide waves instead, which is more flexible and thus gives higher utilization of the engine and better perf/W.
Doing a little searching it seems that RDNA already increased reliance on ILP over GCN in exchange for being able to keep the compute units fed with fewer threads.
It isn't needed when you can use G6 at 20Gbps instead, which is competitive with G6X. Samsung has already talked about G6 at 22 and 24Gbps as well, so G6X isn't that relevant. It's prototype tech that will probably be superseded by GDDR7 sometime in the future, when G6 isn't cutting it anymore.
A thousand dollars for a GPU... I'm still over here unwilling to pay more than $200...
I'd be pretty upset about this if there had been any good games released recently. I guess it's good the gaming industry is dead, because GPUs are entirely unaffordable.
I will be interested to see how these perform on double precision which is advertised to churn at 1/16 the s.p. performance (whereas NV is again at 1/64 with this gen)
I guess we'll see how these turn out. Ray-tracing is not an optional feature at this point, so at these price points AMD better at least be hitting 3090 levels of performance with maxed RT settings or else buyers are going to be stuck with tech that's outdated upon release. And hopefully FSR 3.0 is a better competitor for DLSS 2 from a visual standpoint than FSR 2.0 is.
There were good reasons why RDNA 2 was so unpopular in the desktop PC space. I'm really hoping AMD is more competitive this generation, for the sake of people buying Nvidia as well.
I don't get your negativity. FSR 2.1 is easily good enough to compete against DLSS. RT will be 50-70% better with the new Radeons, which should be enough, though of course not on the level of the 4090; they are still one gen ahead and it seems they invest more.
I'm glad you don't see negativity, as there wasn't any. This is a tech site, so we try to be honest about where hardware stands, and AMD hasn't truly been competitive in the high-end GPU space for many years now.
And no, FSR is not easily able to compete against DLSS, unless you're strictly referring to performance. Nobody who has done a deep dive into the image quality would say that FSR is up to par with DLSS, which is to be expected as current AMD cards don't have the hardware to drive more advanced image reconstruction features. It's still better than DLSS 1.0 by a significant margin, but nobody who had the option to choose one or the other would choose FSR over DLSS 2.x.
Again, I hope AMD really pulls through this time and justifies those price points without the caveats that they can't compete with modern Nvidia when it comes to common and widely used rendering methods. Rasterization alone is not good enough. Ray-tracing matters, image reconstruction matters, and other software features matter as well when it comes to justifying the prices of GPUs.
Hardware Unboxed did a pretty thorough analysis of DLSS 1, DLSS 2, FSR 1 and FSR 2. The findings were that AMD, although late to the game, is pretty competitive on the software front. And they'll just keep getting better and more dominant with the support from Microsoft (Xbox) and many third-party developers (PlayStation), since the consoles rely on their technology.
I'm not a fan of Khanan, but he is correct on this one.
Also, AMD has been competing in the high end, you just haven't noticed. It's not them, it's you. Like the fast RX 6900XT and RX 6950XT. Basically the whole RDNA 2 product line was more impressive to me, compared to the big, hot, and thirsty RTX 3000 cards, which not only skimped on RAM but charged a premium. Then there were the likes of the Radeon VII and Vega 64, which had poor launches compared to the GTX 1080 Ti and GTX 1080. But years later, we actually see those AMD cards take the lead, since they had the superior hardware and their software has only begun to catch up. The same thing happened with the R9 290X versus the GTX 980, or the HD 7970 versus the GTX 680. Even their other cards, like the RX 480 and RX 5700 XT, have aged better than their competitors like the GTX 1060 "Ti" or the RTX 2070 Super.
That's a pretty strange take, really. Hardware Unboxed did not do an in-depth look at image reconstruction. Motion handling is a key metric of image reconstruction and they really did the weakest job of assessing common weak points, which is strange as I generally expect better from them. The lack of dedicated hardware for reconstruction is just not something that can be fully overcome no matter how much work is put into the software.
Also, why would you think the RDNA 2 architecture was more impressive than the 3xxx series? Serious question, as it had last-gen hardware features compared to the competition and lagged significantly in software as well. The RAM difference also was not an issue outside of AMD-sponsored titles.
Old AMD cards aging better than Nvidia also isn't a great point, but, yes, AMD hardware has tended to increase in performance after release due to rougher drivers. That's not really a great marketing angle, but it's not as much of a thing now that AMD's drivers are finally getting closer to par.
Again, I really want AMD to perform well and be a genuine competitor at the top of the charts (which even AMD has admitted they're not doing right now). It's a tough battle for them as their R&D budgets have been so much lower than Nvidia's on the GPU front, but I'm really hoping that has been changing with the success of Ryzen and the sales they've been making on the console side. I've historically bounced back and forth between companies up to the 290 which was my last AMD card, and would love to once again have more than one option for cutting edge graphics tech each generation.
Not so much the RDNA 2 architecture, which was good, but more so the lineup of cards.
I just found them to offer better performance at lower prices, compared to Nvidia's RTX-3000 series. Obviously they didn't have DLSS 2.1, RT performance, NVENC, and other advantages. And to top it off, Nvidia was using a lesser node, which just speaks to the strengths of their architecture.
I still don't think RT is important for gamers just yet, but the next generation it likely would be more mainstream and perhaps have a larger visual impact.
Hardware Unboxed didn't have specialised tools to decode the software and analyse from the backend. What they did was to pixel peep, and judge the final result. Which I deem acceptable, but we can disagree on this point. Perhaps you have another review that has done a better examination?
I'm not sure exactly how the future will pan out. From my understanding, a chiplet design means more complexity (up front), slightly lower performance, and slightly higher power draw. So we might see AMD become aggressive in the dGPU market, but Nvidia may always hold the crown for top performance, and hold the efficiency lead in laptops. AMD, meanwhile, could hold most of the market share in the desktop PC space simply by offering faster and cheaper cards in the (RTX xx80 to xx60) middle.
AMD finally has some breathing room for R&D thanks to multiple revenue streams: shareholders, servers, consoles, CPUs, GPUs. And each is competitive in its field and generating profits which cycle back into the business.
If I have ever seen a delusional Nvidia fan, it’s you. You’re easily talking down all the upsides of AMD while talking up all the weak sides of Nvidia. Laughable that you’re pretending to be neutral. I’ve been using Nvidia for many, many years now, btw, and I'm still not a toxic fan like you are. I’m the paragon of neutrality, so I know that you aren’t anywhere near the mind space I am in. You’re not neutral, you’re a hard Nvidia fan. I’m the opposite: I see all the high and low sides of either company and of all GPUs, despite not owning a Radeon since 2014.
AMD fans need to get this through their heads. It's not a good thing that the likes of the Vega 64 are at their apex today.
Are they selling them today? Nope. So it doesn't matter.
Taking 6 years to optimize your hardware is atrocious; by the time it matters your hardware is obsolete and everyone has moved on to newer generations. This is part of why Nvidia sells so well: it's better to have 100% of the performance today than to only reach it 6 years from now.
It's not about optimizing; the software needed to catch up to the forward-looking architecture, because it only runs well in DX12 games. The only thing they really optimized, aside from small percentage gains and new games, was DX11 performance, and that was mainly for newer Radeon cards.
To make it clearer: Vega and Fury X have a huge number of shaders, and they can only be fully utilized in low-level-API games (DX11 maybe since the newest drivers, and if you increase GPU utilization with a very high resolution). It doesn't have that much to do with fine wine, unless you count the architecture aging better into it, but FineWine was about software, not hardware, aging.
Yes, AMD is actually good since RX 6000 Gen, just not if you’re a RT Fan, at least not for 4K. 7000 Gen aims to fix that as well. FSR gets better and better too, they have just released version 2.2 and working on FSR 3.
The interesting thing is, I would’ve loved to have FSR 2.1 for my GTX 1080 Ti back at the end of 2020; it was only just integrated into CP2077 now, and I simply didn't have enough performance for that game back then. People forget that it benefits a lot of users, not just Radeon owners.
And again you’re talking complete nonsense. Seems you’re a huge Nvidia fan that pretends to be neutral, while he absolutely is not. 6900 and 6950XT were easily competitive with Nvidias 3090s and even faster at lower resolution which a heck load of people care about, more than 4K. It’s only ray tracing like I said, that keeps Nvidia ahead. 4090 is a bit faster than 7900XTX in raster but it’s such a low amount that doesn’t really matter as both cards are easily fast enough for 4K high fps. It’s just ray tracing again, where Nvidia is ahead.
And FSR 2.1 is easily competitive with DLSS as it has good enough quality, unless you’re a toxic Nvidia fan who can’t accept the reality of DLSS just not being that special.
Article referred to a 4090 Ti… there’s no such card yet.
I had read somewhere that AMD confirmed RDNA 3 would have the same encode/decode engine, but that seems not to be the case… will be interested to see if 10-bit 4:2:2 decode for h264/6 is included. Could be a game changer for many content creators…
7900XT_ N 31 at $999 and $899 I'm waiting to see the real price which I anticipate + 20% in the channel. 6900XT and 6800XT were offered at $999 and $640 which was low out the gate. Both minimally valued by the channel on a sustainable margin, the real price on application perfect pricing that is not an artificially low PR price by + 41% and 62% respectively. I anticipate N 31 priced at minimally AD FE MSRP if you can find one at that price. mb
We don't have a GPU crisis anymore. But the XTX at just $100 more than the XT does seem hard to believe. I think we'll see AIB models go crazy with power and clocks to justify higher prices. AMD has sandbagged the performance of the reference models but mentioned that the 7900 XT_ is designed to scale up to 3 GHz. Which is... 20% higher.
I suspect the AMD models to be near the msrp on the AMD shop if you can’t buy them in regular shops for that price with a PowerColor sticker on them. But that’s normal clocks then and not an extra huge and silent cooler. You pay more you get more.
I recall that GCN was actually a TLP design, not an ILP design like Evergreen and the TeraScale architecture in previous generations. I think the issue with GCN was that the hardware scheduler wasn't able to scale as well when going beyond 11 CUs, due to internal architectural bottlenecks with instruction issue, internal bandwidth, and the 4-shader-engine limit.
RDNA 1 and 2 are convincingly TLP designs, and moving to an ILP design would require a major redesign of the compiler and the ISA (RDNA can run under Wave64 as used on GCN), so I doubt they would take such a risky step, as it means their design would require optimizations done from scratch (unless this design keeps the Wave32 approach, or goes to a Wave64 approach like GCN).
The issue was that GCN forced Wave64, while RDNA and its successors can do Wave32, aside from other optimizations and a more balanced architecture with more ROPs. GCN was simply not optimized for gaming, while RDNA is.
Wave64 would leave too many bubbles, but each GCN wavefront was 64 threads wide and couldn't be issued as 32 like on RDNA. GCN also had a terrible geometry bottleneck which only got somewhat fixed with Vega, lol.
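To make the "bubbles" point concrete, here is a minimal sketch (the thread counts are made up for illustration, not AMD data) of how many allocated SIMD lanes actually do useful work when a partially filled workload is packed into Wave64 versus Wave32:

```python
# Hypothetical illustration: lanes sit idle ("bubbles") when the active thread
# count does not fill the fixed wave size the hardware has to allocate.
import math

def utilization(active_threads: int, wave_size: int) -> float:
    """Fraction of allocated lanes doing useful work."""
    waves = math.ceil(active_threads / wave_size)
    return active_threads / (waves * wave_size)

for threads in (20, 48, 96):
    print(f"{threads:3d} active threads: "
          f"Wave64 {utilization(threads, 64):.0%} vs "
          f"Wave32 {utilization(threads, 32):.0%} lane utilization")
```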
I'd love to mate the XTX with my 7950X, currently sharing space with an Nvidia RTX A4000. However, I need vMix, DaVinci Resolve Studio, and OBS to properly leverage the encoders in the XTX at launch. My life is high-end corporate streaming and video editing, and a 4080 currently flies on all of my use cases. An extra $500 on a 4080 saves me $1,000-2,000/month in time rendering out 4K and greater footage.
197 Comments
Threska - Thursday, November 3, 2022 - link
The prices for both aren't out of line for what one gets.
Kangal - Thursday, November 3, 2022 - link
I don't know, AMD is more likely to give consumers a fair price or discount. And with the launch of Nvidia's cards already, they know what they're competing against both price-wise and performance.I feel like the RX-7000 is going to be great, but it's not going to have as much uplift as the RTX-4000 series.
But we don't know everything, especially with the cryptomining mostly behind us. Who wins is a different answer depending on your perspective.
Khanan - Friday, November 4, 2022 - link
He meant both of the AMD GPUs not Nvidias. Lol. No Nvidias prices are insane and laughable compared to this. I bet this will destroy the 4080 and that for 200$ less. Nvidia should quickly adjust pricing on all their cards, and fix their broken power connectors that are just a train wreck. Even cables burn, no adapter used.haukionkannel - Saturday, November 5, 2022 - link
Nvidia don´t have to reduce prices, because their GPU sell well enough even if they are much more expensive!I am sure that Nvidia will release 4080ti that is about the same speed as 7900XTX and cost $200-$300 more (Nvidia tax) and reduce 4080 price to the same as 7900XTX and 4080 will sell more than 7900XTX even it is slower, because it is Nvidia...
"Buy the real Nvidia at the same price as the imposter!" "The more you buy the more you save!"
Nvidia marketing will win also in this time, just look the sell numbers.
But it is good to the rest of us that AMD did make reasonable priced GPUs so we don´t have to buy Nvidia gpus.
Khanan - Saturday, November 5, 2022 - link
That’s your opinion and nothing more. The 4080 is certainly too expensive if it’s slower than 7900XTX and XT, so no, I don’t agree. Many people will share my opinion. Only the 4090 as a halo product can be overpriced, the rest can not.Skiddywinks - Sunday, November 6, 2022 - link
The thing is the numbers play out this way. Whether you agree or disagree is irrelevant; nVidia sell more cards even when they don't make as much sense.This time it does seem like a more dramatic value advantage than in last year's, so clearly AMD are hoping to really start their "Zen moment" for RDNA, and I think they will definitely sway more of the market than usual. But at the end of the day, nVidia have a lot of (in my opinion undeserved) mindshare. That's hard to compete with.
meacupla - Monday, November 7, 2022 - link
AMD is getting their Zen moment for RDNA3 right now.The performance is there, the drivers are mature, and the chiplet design is significantly cheaper to produce than what nvidia is offering. There is an added benefit of AMD being power efficient, which is looking like it will be a factor in some areas.
nvidia likely cannot lower prices all that much, if they keep using monolithic dies with the latest and greatest process node from TSMC. nvidia doesn't have a chiplet design for at least 2 generations of cards, and this is likely going to tank them.
Threska - Monday, November 7, 2022 - link
The enterprise market hasn't disappeared, and currently they dominate there. Plus all the things you mentioned don't matter as much in those spaces. So no I don't think it will "tank them" any more than AMDs mistakes back in the day "tanked them".Khanan - Thursday, November 10, 2022 - link
AMDs Zen Moment was already with RX 6000, as it brought parity to Nvidia. To be precise, it was like their Zen 1 moment. A Zen 3 Moment still didn’t happen, as AMDs still not faster. RX 6000/7000 are at best comparable to Zen1, Zen 2 was already better than Intel in many regards as it had way more cores, just not in purely gaming but fast enough. That is not happening here. If anything, Radeon is like Zen 2 gaming wise while Instinct is very competitive to Nvidia or faster, so all in all it’s not comparable to the CPU space. For me a true Zen moment would be if they can surpass Nvidia, didn’t really happen unless you only care about 1080/1440p gaming.Bruzzone - Tuesday, December 13, 2022 - link
So on 7900XT_ GCD to 6 x 16 MB SRAM latency issue AMD adds HBM lanes to the commercial memory controller option supporting Dual [x2 GCD] on interposer? mb
lilkwarrior - Friday, December 9, 2022 - link
…Chiplet design is an implementation detail that doesn't matter to end users. If Nvidia's performance is still outstanding, with GPUs in a class of their own, Nvidia will continue to have a healthy business prioritizing enterprise customers, professionals, and prosumers as a premium GPU company that provides premium GPUs in every segment of its business.
Those audiences are a sure thing during a recession compared to price-conscious consumers. For that reason, it is no surprise Nvidia sold out of the 4090s they were willing to make, happy with the ROI at the price they asked for them.
Price-conscious everyday consumers, especially around Black Friday, are not reliable audiences to ship things to at the end of the year to get a predictable return. Accordingly, AMD and Nvidia prefer to ship high-end GPUs to begin a GPU generation to get predictable return shipping cards specifically for them. Hasty average consumers buying such cards well beyond their usual purchasing power is merely a nice bonus.
Bruzzone - Tuesday, December 13, 2022 - link
AD at 67 M units which is the minimum NV volume over 20 months where the first unit is priced at cost decreases 1/67 millionth every unit until the last unit produced is sold price at cost. mb
Khanan - Thursday, November 10, 2022 - link
As long as there are mindless sheep who buy everything, yes. But sheep aren't shopping for $1,200+ cards, so you are wrong. The 4080 might not be such a huge seller; it's overpriced as hell: $500 extra over last gen, and for a smaller chip, no longer the biggest.
haukionkannel - Friday, November 11, 2022 - link
The 4080 doesn't have to be a big seller! Nvidia can sell the 4070 and 4060 to those who only have $700-900 for a GPU, and they can sell the 4050 for $500 to those for whom $700 is too much... and if $500 is too much, they can always sell a 1060 or 1030 for $300...
They don't care which model the customer buys, as long as it is an Nvidia GPU. They make a big profit on each of them, and as has been said, people buy those even at high prices, more than they buy AMD GPUs.
It is just a fact that you can see by looking at any statistics...
Khanan - Sunday, November 13, 2022 - link
We will see what will happen, you don’t have a crystal ball. AMD priced 7000 series low with higher performance than RTX 4080 to disrupt Nvidia. And many people have already praised that decision, you’re not that well informed, bro.
imaheadcase - Saturday, November 12, 2022 - link
I like how you call people who have money sheep. That is like someone keying a Tesla just because they're mad.
Khanan - Sunday, November 13, 2022 - link
Not really my problem if your IQ is too low to understand my point. My point was never about money; my point was that people with more money usually also have more brains and will easily see that the RX 7900 has more performance, and will often rather buy that. But we will see, no use arguing with fanboys.
catavalon21 - Saturday, November 19, 2022 - link
"...my point was that people with more money usually also have more brains ..."HAHAHAHAHAHAHAHAHA
Thud2 - Monday, November 21, 2022 - link
"Whether you agree or disagree is irrelevant"To you obviously but but others may value discussion.
Ithaqua - Monday, November 7, 2022 - link
Actually none of NVidia's products are overpriced as long as they can sell out of them. They're just overpriced for you.
Not saying I'd pay that NV Tax, but it's their job to maximize shareholder returns and if they can sell out at $1500, why sell for $1100?
hecksagon - Wednesday, November 9, 2022 - link
Allocations are based on presumed demand. Nvidia tried to drop some of their TSMC allocations because they knew demand wouldn't be as high on the 4000 series. The goal appeared to be to introduce some artificial scarcity in place of the crypto scarcity. This would allow them to price the cards high and get some really good margins. TSMC said no, Nvidia still priced their cards high, and now we are in a situation where they very well may not sell out.
haukionkannel - Friday, November 11, 2022 - link
Nvidia did move gaming GPU production over to enterprise GPU production, so they did actually reduce the number of their gaming GPUs. TSMC does not care what Nvidia produces as long as their allocation stays within the deal.
Bruzzone - Tuesday, December 13, 2022 - link
4080 12 was priced $134 to $170ish and maybe even $230 too high at unit 1 of production. mb
Khanan - Thursday, November 10, 2022 - link
Actually you’re wrong. You’re just extremely ignorant and living on your own small moon in a lonely part of the universe. Everyone and their dog says that the 4080 is extremely overpriced. Wake up.
lilkwarrior - Monday, November 14, 2022 - link
Maybe everyone and their dog around YOU think that way. Nvidia and AMD run in a global market. Neither card is expensive for the demands of North America, based on what people, not just cryptominers, were willing to pay.
Nvidia stock is what they're willing to make; if they sell out of every GPU they're willing to make at their current prices, they're happy, unless their stakeholders pressure them to sell more than that.
Qasar - Monday, November 14, 2022 - link
" Maybe everyone and their dog around YOU think that way. "nope, same around here where i am as well. 4090 starts at $2220 and tops out at $2780 the 4080s are more then likely going to be at best $500 less then that here to at the entry level. and probably over lap the entry level 4090's. over all rtx 40 series is overpriced across the board so far here. add to that the melting power connector, and the interest for a 40 series rtx, has dropped quite a lot, and most are now waiting till December to see what the radeon 7900 series brings
Bruzzone - Tuesday, December 13, 2022 - link
Not at q1 risk volume production. mb
Bruzzone - Tuesday, December 13, 2022 - link
4080 16 at $1199 is priced appropriately on q1 risk production before volume ramp marginal cost decline to peak production volume at which time the card can be sold for $945 into run down. The reason 4080 16 seems overpriced at q1 risk production volume is because 4090 FE for $1599 was at that time $300 underwater and anyone who got one for $1599 took Nvidia for $300. AIB dGPU price through the broker market is the only real price on segment by segment, on supply and demand application competitive perfect price, on ROI, for the application workload. mb
Bruzzone - Tuesday, December 13, 2022 - link
right NV will introduce 80ti AD 102 on top of 80 16 AD 103 the odd ball die size that will then be dumped on 7900XT_ mb
Kangal - Thursday, November 3, 2022 - link
This makes no sense.
They control the market and the pricing. Outside of pressure from competitors, we can only make suggestions to them by voting with our wallets.
All I know is that AMD has gone from their dark days (bankruptcy) to their (highly valuable) bright future. It's all because they offer products that are valuable.
Remember the era with Bulldozer vs Intel Core-i, from Gen 1-Gen 6 it was a massacre. Felt like they completely relied on their "ATi" division at the time for GCN 1-3. Then they were left behind by the early Nvidia-900 to the late Nvidia-1600 series, during which time they had their "Zen" moment. It felt like RDNA was the same Zen moment for their GPU division, and this is their Zen3 moment. History rhymes.
Samus - Friday, November 4, 2022 - link
It is ironic how everyone freaked out when AMD bought ATi, only for ATi to be the entity that kept the lights on. There was nearly a solid decade of undesirable AMD CPUs, essentially from the Phenom/Athlon X4 up to the launch of the first Zen parts... Intel just killed them after the Netburst era.
All that said, I think this incredibly innovative card, regardless of its performance, takes a hard line on nVidia's statement that 'GPUs will continue to cost more every generation' by REDUCING the launch prices gen-over-gen. That's a very important PR move for AMD, and historically this works very favorably in the gaming industry, a tactic Sony has used to trump Sega and Microsoft in overall sales.
StevoLincolnite - Friday, November 4, 2022 - link
There were a few moments during that period where AMD genuinely had some good hardware on the CPU front.
The Phenom 2 x4 965 and x6 1090T were amazing chips on their release.
The Athlon 2 x4 640 was also a fantastic budget quad-core CPU at the time too.
Then you had the triple core CPU's which were decent chips in their own right, but you could also unlock them into quad-cores.
Once bulldozer arrived, the FX 8120 was a decent part that overclocked extremely well.
Sure, they couldn't beat Intel, but they were good parts that lasted.
Threska - Friday, November 4, 2022 - link
Still have most of mine and for those into virtualization and ECC, AMD was less of a headache than Intel and their "market differentiation" aka milk the consumer.
StevoLincolnite - Friday, November 4, 2022 - link
I had the Phenom 2 x6 1090T, which happily sat at 4 GHz, but it wasn't until you overclocked the NB and HT Link and pushed the RAM up that these chips started to breathe.
Eventually I went with the Core i7 3930K as I wanted more, but that chip is still humming along fine in my grandmother's machine.
Samus - Tuesday, November 8, 2022 - link
I'll definitely give you that. It's ridiculous how Intel nerfs their CPUs and chipsets out of ECC support... to this day.
Oxford Guy - Sunday, November 6, 2022 - link
‘Once bulldozer arrived, the FX 8120 was a decent part that overclocked extremely well. Sure, they couldn't beat Intel, but they were good parts that lasted.’
I don’t know what market AMD was targeting with Bulldozer. Piledriver used the same number of transistors as Sandy Bridge E and for what?
Everything about the design was substandard. AMD even broke AVX with the Piledriver update.
AMD not only couldn’t beat Intel in performance, it couldn’t come close in efficiency.
It seems the only market the BD parts worked well enough in was the GPU-centric super computer, where the inefficiencies of the BD design could be more masked.
GeoffreyA - Sunday, November 6, 2022 - link
Bulldozer was disastrous. But it was, and is, one of the most innovative, original CPUs made. AMD, stunned at how the Core microarchitecture was thrashing them, tried to come up with something completely different. Unfortunately, that doesn't usually work in this field, the principles of how a good CPU is made having already been laid down. Bulldozer was ambitious though; it aimed to bring them back, but failed to deliver. (At least on the single-threading front.) Yet, through its four iterations, AMD improved the design, raising IPC and dropping power consumption. Seemingly, it taught them a lot, and Zen benefitted. Compare the efficiency of Zen 3 to ADL or Zen 4 to RPL.
Also, they were stuck with it, CPUs taking time to be designed; and according to Keller, there was a great deal of rubbish going on behind the scenes, as well as a despondence that they couldn't aim higher.
"AMD not only couldn’t beat Intel in performance, it couldn’t come close in efficiency."
That's true. But what happened to those efficient CPUs today? The tables have been turned, and paradoxically, Bulldozer partly created Zen.
Bruzzone - Tuesday, December 13, 2022 - link
Notorious or a notoriety legal settlement hard to say. mb
Samus - Tuesday, November 8, 2022 - link
The problem AMD had with Bulldozer was a lack of market position to get people to adopt its design into their software. It couldn't hold a candle to comparably priced Intel chips at the time, even older Core 2 Quads, that stuff was optimized for.
The sad irony is that AMD's greatest gift to x86, 64-bit memory allocation, was so ahead of its time that they were way behind when it went mainstream with Windows 7, the first OS to have initial adoption at 64-bit over 32-bit.
Bruzzone - Tuesday, December 13, 2022 - link
Construction era A9 Acer client here great graphics big screen client but no faster and slightly slower than my 2007 Core 2 Duo mobile give away sludge to HP by Intel in sales package because it was such a computation dog and this isn't much better, yes typing right here (avoid large Excell dB) but ok as a client device and it has its own storage. mb
TheinsanegamerN - Friday, November 4, 2022 - link
It's incredible that people credit ATi for keeping the lights on at AMD, forgetting that the multi-billion dollar boat anchor that was ATi was the reason AMD struggled to release new CPUs for the next decade...
schujj07 - Friday, November 4, 2022 - link
It takes years to design a CPU. Once AMD started down the CMT path they were stuck there for quite a while. For a long time it was the GPU business that kept AMD afloat.
Yojimbo - Saturday, November 5, 2022 - link
Any debt AMD took on to buy ATI was AMD's fault not ati's. AMD made mistakes with their CPU architecture and their fusion project. I don't know what input the ATI people had which swayed or didn't sway AMD one way or another on fusion, but i think creating fusion was AMDs whole reason for buying ATI, So AMD are primarily responsible for it and for the compromise of the GPU architecture that resulted from it. But I think the biggest contributor to AMD's misfortunes was their failure with their fabs. Even after they sold them and got cash for them they were still pinned down by the wafer supply agreements they signed as part of the deal. It took them years to finally extricate themselves from their fabs failure. But the offloading of the fabs was probably the most important part of AMD's turnaround. Otherwise they never would have had the money to dig themselves, especially their fabs, out of the hole they found themselves in.Samus - Tuesday, November 8, 2022 - link
The only boat anchor was GloFo and AMD's CPU business. ATI dominated consoles after the Gamecube and has ever since. The integrated hardware division of AMD (specifically GPUs) had a higher gross profit than every other sector for YEARS.
So yeah, ATI kept the fucking lights on.
Kangal - Wednesday, November 9, 2022 - link
This.The worst part about all of this, is it was completely avoidable. Firstly, the immoral and illegal practices that Intel did by having their customers (partners) buy their processors and effectively block AMD in the laptop-space and desktop-space. Well it hurt AMD's reputation, plunged their revenue, and negatively affected their R&D capabilities. We're talking billions at a minimum and in a crucial period. Had that not occurred, perhaps AMD could have either saved GlobalFoundries or have released them years earlier (fabless semico designer).
Secondly, it's about Microsoft. AMD's designs also suffered due to software bugs and optimisation issues in WindowsXp and Windows Vista. Not just day to day stuff, but professional applications and games too. The sad part was, Microsoft acquired most of the workers and business from Sega, when they were planning a Windows Gaming Console. They initially designed the original Xbox with AMD. Yes, they had full working prototypes and everything, that's even what they showcased on-stage. But the ego within Bill Gates couldn't be quenched, and he cancelled the deal, and signed on with Intel and Nvidia simply because they had better "street cred". Had the original Xbox gone mainstream with the AMD hardware, it would've been slightly weaker but much much cheaper for both the Consumers and Microsoft as well. It would have slingshot AMD's revenue back to the black, their reputation would have increased, and software for AMD would have been much more optimised.
In fact, Microsoft couldn't build the Xbox 360 afterwards because both Nvidia and Intel had raised their prices by several fold. AMD was bankrupt at the moment and could not bid on the tender. Microsoft was left to abandon the Xbox 360 project. They really wanted to stick to their x86 design and push to all the large development studios their Direct3D API. Luckily they were able to make a successor by using a new processor designed by Sony that they were able to "borrow" due to a license loophole from IBM, but it meant building a new unique Operating System, and having to resort to emulation for backwards compatibility (instead of native/seemless support).
In a better timeline, Microsoft would've championed the 1-core Athlon 64bit in their console, AMD chipsets would have gotten much more optimised. Microsoft would be stop Intel from blocking AMD chips being sold to OEMs. With new R&D, AMDs next processor, The Phenom 1, would have had more cache and more bandwidth, putting it ahead against the first Intel Core. The Xbox 360 would have adopted this, a dualcore x86 console, with seamless backwards compatibility, and a large revenue source. Not to mention AMDs support of Linux. AMD could have sold GlobalFoundries and bought ATi rather at the same time. They could have shifted their low-priority 90nm-45nm products there (Motherboards, low-end CPU, low-end GPU) and ordered their high-priority 28nm-20nm products (high-end CPU, high-end GPU) from TSMC instead in the late 00s. They would have been competitive, and we may have even seen Zen1 come to the market way earlier, like in 2012-2014 instead of 2017-2018. Five years and Billions of dollars is nothing to sneeze at. It would have meant that the "Xbox 720" would have happened sooner as well, all of these positive loops, making AMD a better product and company overall. That in turn means a much healthier competitive market, instead of the three Microsoft+Intel+Nvidia giants. We may have AMD partnering with Valve, and see the rise of the Steam Deck years ahead.
GeoffreyA - Thursday, November 10, 2022 - link
Excellent comment!
catavalon21 - Saturday, November 19, 2022 - link
+1
Bruzzone - Tuesday, December 13, 2022 - link
Yea Intel was bad at 15 USC 1, 2 and 18 USC 1962(c) and 18 USC 1956 where MS is explicitly tied to Intel contractually it takes two or more signatures and horizontally in combination through 2001 on Intel Inside tied charge back tying Wintel to the channel and to media sales preview for kickback known as tied registered metering. mb
Bruzzone - Tuesday, December 13, 2022 - link
agreed should have bought S3. mb
Khanan - Friday, November 4, 2022 - link
Not quite. Phenom II was pretty decent and I used it for a good while. Bulldozer and later were trash, aside from the APUs, which were fine. So you’re wrong on multiple counts.
CoachAub - Friday, November 4, 2022 - link
I must agree with Khanan. Phenom II took the lead for about a month until Intel's next gen took over again. That was a solid series of chips. Sadly, the original Phenom launch was disappointing, much like the Bulldozers. Had they launched Phenom II instead of Phenom I, they would have regained the ground they'd lost after the original Athlons were passed up.
GeoffreyA - Saturday, November 5, 2022 - link
The original Phenom was cache-starved on the L3, I believe, and Phenom II doubled or tripled the cache, whichever is correct, throwing open the doors to its performance.
dr.denton - Monday, December 12, 2022 - link
Another issue for Phenom I was the disastrous 65nm process and AMD's insistence on making the quad cores monolithic dies instead of going the Intel route and just slapping two dual cores together.
Makaveli - Thursday, November 3, 2022 - link
The presentation started off really slow, but Scott really brought the energy, and the guy after him, what a finish. I was not expecting all those extra features and the price. I am for sure going to upgrade from my 6800XT. I was going to go 7900XT, but the 7900XTX for only $1,000 is a steal, I may have to move up.
Khanan - Friday, November 4, 2022 - link
The best moment was when he ridiculed Nvidia for the power connectors that are just terrible. Nvidia trying to be so Apple-like, using small PCBs and small connectors, led to this disaster.
fybyfyby - Thursday, November 3, 2022 - link
I thought the XTX would be around 1200 USD. RT could be higher, but the cards are great!
Kangal - Thursday, November 3, 2022 - link
I was expecting about USD $1300 and -10% performance against the RTX 4090.
With this USD $1000 launch, my expectations are a bit more subdued. I'm expecting the XTX to trade blows with the RTX 4080, but overall be the better card, minus the exclusive features of RT and DLSS-3.
nandnandnand - Thursday, November 3, 2022 - link
RT is not an exclusive feature.
Khanan - Friday, November 4, 2022 - link
You base your expectations solely on price? So you’re one of those guys who think price is everything. Hahahaha… I can’t tell you how many times I bought excellent products that cost way less than others; you couldn’t be more wrong. The 7900XTX is competitive with the 4090. For $600 less.
temps - Friday, November 4, 2022 - link
Where do you guys come from? I don't understand how the tech review site with the most doctorates on staff has, by far, the dumbest comment section... competitive with the 4090, how? Looking at basically the one and only game that is indicative of where graphics are going (Cyberpunk), the 7900 needs FSR to equal what the 4090 does at native resolution... and once DLSS comes into the picture, it's in an entirely different, lesser realm with absolutely no hope of making up the enormous gap.
It is clearly a large step behind, which is why it costs far less. AMD priced it precisely where they needed to. There are no bargains in graphics anymore - you get exactly what you pay for.
Khanan - Saturday, November 5, 2022 - link
Where I’m coming from? I’m clearly higher-IQ and more informed than you are. There are already multiple reviewers who stated that the 7900XTX may be only 10% behind the 4090. For that they used information from AMD's official presentation and compared it to their own 4090 data. Just because the 7900XTX isn’t costing you an arm and a leg and has a normal price doesn’t mean its performance isn’t competitive with the competition. It’s ironic how you attack me on an intellectual level while just being some average emotional dude who isn’t even able to be unbiased and informed. I saw at least 5 reviewers who said that the 7900XTX is probably competitive with the 4090.
Khanan - Saturday, November 5, 2022 - link
I want to add: you talk complete nonsense, there are no comparisons between AMD and Nvidia with AMD using FSR and Nvidia being native. All the comparisons I’m talking about were made on the premise of no upscaling used. Other than that, we will see. End of story.
catavalon21 - Saturday, November 19, 2022 - link
"...PROBABLY competitive..." That's the nut. Let's wait for half a dozen sites to do comparisons once hardware is on the street and perhaps have a better informed discussion? I don't implicitly trust ANY company's press releases to make informed decisions. I am HUGELY optimistic for this gen, having owned AMD and Intel CPUs, and AMD and NVIDIA GPUs. Let's see how these play out before we claim who is more informed than whom, since neither you nor I have seen one of AMD's cards independently tested. Thank you sir.Makste - Saturday, November 5, 2022 - link
I think that the comments are dependent on a person's perspective. They are not dumb. The rtx 4090 is untouchable to you because it satisfies the qualities you need in a GPU, while others have qualities they are looking for in a GPU which make the 7900XTX superior to the RTX4090. For example someone who wants to try out the displayport 2.1 is the same as the one who wants to try out Ray traced high refresh gaming. They each will choose a different GPU. And whichever way you slice it, the GPU they choose would be superior to the other option based on their preferences.Khanan - Saturday, November 5, 2022 - link
Yes, and even RT: as long as the 7900XTX has enough to use RT with FSR activated in 4K, like 60 fps, maybe a bit more, it is enough. RT is a single-player game feature and you don’t really need high fps in SP games; 60 fps is easily enough and easily fluid enough. By that I mean 60 fps on a decent 144Hz display with backlight strobing, not some random monitor, of course.
dr.denton - Monday, December 12, 2022 - link
So, you buy a $1,000 GPU to run this generation's games at 60 fps in 4K with RT. Fair enough.
But surely someone as smart as you will understand that coming games will be more demanding, so a GPU that can JUST hit this target now will fall below that in a year or so?
Bruzzone - Tuesday, December 13, 2022 - link
As long as there is no 80ti 102 Nvidia and AIBs can sell every 4090 they make to government labs and institutional customers at broker market price and some of those customers don't even want the card just GPU to implement in their own designs. 103 is an oddball at 397 mm2 and 104 at 295 is the mass market product. mb
catavalon21 - Tuesday, December 20, 2022 - link
Link to reviews to support such a claim, assuming you mean "competitive" in terms of performance in games, Compute, and Ray Tracing (all of which are part of the 'package')?
Zoolook - Friday, November 4, 2022 - link
Interpolation isn't really anything to brag about; I hope AMD isn't jumping on that shit-train. This is starting to remind me of the old days, with worse and worse variants of AA introduced every new generation, and the game.exe and 3dmark.exe id-to-game-perf tricks.
I wish Nvidia at least once could focus on improving visual fidelity instead of inventing new ways to sacrifice fidelity for speed.
Khanan - Saturday, November 5, 2022 - link
Upscaling is mainly important for using RT, though there are some games that will have enough fps anyway, if you can manage to play at 60 fps with no upscaler used. I don’t see the issue; old-schoolers whining about upscalers need to grow up. As long as the quality is decent, it doesn’t matter whether it’s native or upscaled.
Oxford Guy - Sunday, November 6, 2022 - link
According to analysis at TFTCentral, 8K is useless on the desktop for gamers. If this card is targeting 8K, that implies, but doesn’t guarantee, good 4K performance. That implies less need to switch on image-quality-reduction features to obtain performance.
Of course, games that use RT on invisible grains of sand under buildings and tessellate meticulously undulating water within solid walls will be too demanding for parts not designed to remove those anchors.
Bruzzone - Tuesday, December 13, 2022 - link
"I wish Nvidia at least once could focus on improving visual fidelity instead of inventing new ways to sacrifice fidelity for speed." interesting point so noted. mbTallestJon96 - Thursday, November 3, 2022 - link
This was an interesting and fairly competitive launch. $899 and $999 at the top end bode well for well-priced 7800 XT and 7700 XT type products.
It's interesting to see that AMD has also made the switch to a "double FP32" architecture the same way NVIDIA did with Ampere. On the other hand, the ray tracing claims are fairly lackluster, and you have to wonder if the performance increase will be a full 50% or not at 1440p. They claim 50% better ray tracing, but that just keeps up with the raster. RDNA 3 needed to be about twice as fast in RT to begin competing with Lovelace.
Overall, it seems AMD will play the same role in the market as they have this last generation. They are not the outright fastest, FSR 3 will lag behind DLSS 3, and ray tracing is a sore spot, but they are priced well and have better efficiency. I'm hoping they can convince me to upgrade from my 3060 Ti to something worthwhile for <$499 next year.
Thanny - Thursday, November 3, 2022 - link
You're doing it wrong. RTRT performance isn't the final frame rate. It's the relative performance of RTRT versus raster only. Take a hypothetical game that runs at 100fps without any RTRT effects. Turn on RTRT and it's at 30fps with RDNA 2. 50% better RTRT performance means it will now run at 45fps on the same speed hardware. Increase the original raster performance to 150fps, and the RTRT performance is now 67.5fps instead of the 45fps that it would have been without the uplift.
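The same arithmetic, as a minimal sketch (the frame rates are the hypothetical figures from the comment above, not measurements):

```python
# Worked version of the hypothetical example above: treat RT as a multiplier
# on raster throughput, then apply the raster and RT uplifts separately.
raster_old, rt_old = 100.0, 30.0     # hypothetical RDNA 2 fps: raster only vs RT on
rt_cost = rt_old / raster_old        # RT retains 30% of raster fps

raster_new = raster_old * 1.5        # +50% raster
rt_new = raster_new * rt_cost * 1.5  # +50% better RT efficiency on top of that
print(raster_new, rt_new)            # 150.0 67.5 -- matching the figures above
```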
Otritus - Thursday, November 3, 2022 - link
According to AMD’s OWN numbers, there is a 47-84% performance uplift in RT. Also according to AMD, the flagship is 70% faster. Meaning RT performance is not improved relative to raster, and may actually be slightly worse. This is in comparison to Nvidia, which had an RT lead and actually improved it gen over gen. @TallestJon96’s analysis is correct.
haukionkannel - Friday, November 4, 2022 - link
The rasterization unit count has increased more than the RT units, so yeah: about 1.5x more rasterization, but relatively less, about 1.2x, for the RT units.
neblogai - Friday, November 4, 2022 - link
AMD's RT results are with FSR, which means it is rendered at lower resolution. But we know that 6900XT performance dropped off at 4K, while 7900XTX performance should be more consistent there (way higher memory bandwidth); thus, the advantage over the 6900XT may be lower at the lower (FSR) resolution where the 6900XT is not limited. So, we do not really know what the new RT performance is yet.
Khanan - Thursday, November 10, 2022 - link
*50-70% more raster, not 70%. Same with RT, 50-80%. So I suspect that RT perf is highly dependent on the game, whether it has a higher percentage of gain compared to raster or is just on the same level. In any case, it could be that RT perf is just high enough to be usable in all games, if you can do with upscalers and 60-80fps in 4K. And that is what is important and nothing else.
Makaveli - Thursday, November 3, 2022 - link
AMD was never going to jump two generations in RT performance on this launch. And I think that is a clearly unrealistic expectation.
NV 3000 series RT performance is what we are getting, which is fine for me at that price!
Khanan - Friday, November 4, 2022 - link
Omg, Makaveli managed to make a fair comment. For once.
Kangal - Saturday, November 5, 2022 - link
Why so toxic?
Khanan - Saturday, November 5, 2022 - link
Ask him.
AdrianBc - Friday, November 4, 2022 - link
Regarding the switch to doing two FP32 operations per clock cycle, I am annoyed that in the specifications AMD has published on their site for the 2 announced cards, there is no mention of the throughput ratio between FP64 and FP32 in RDNA 3, i.e. of the FP64 throughput of the new GPUs.
For a decade, most AMD gaming GPUs had an FP64:FP32 throughput ratio of 1:16, which was better than the 1:32 ratio of the NVIDIA GPUs.
Now there have been rumors that in RDNA 3 this ratio has been lowered to 1:32.
The rumors are plausible, because that is what would happen if in RDNA 3 the FP32 execution units have been doubled, but not the FP64 execution units.
If that is what they did, they should have announced it. There are some people, like myself, for whom the FP64:FP32 ratio is important. Had they kept the old ratio, a Radeon 7900XT GPU would have been twice as fast in FP64 as a Ryzen 9 7950X CPU, for a not much higher price.
In that case, I would have bought an AMD GPU. If they have reduced the FP64:FP32 ratio, then a Radeon 7900XT GPU would have only about the same FP64 speed as a Ryzen 9 7950X CPU, but at a higher price and higher power consumption, so it would not be a competitive alternative.
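As a rough sketch of what that ratio means in absolute terms (the peak figures below are approximate vendor numbers, and the 1:32 ratio is, as noted above, only a rumor):

```python
# Back-of-the-envelope FP64 throughput from the FP64:FP32 ratio.
fp32_peak_tflops = 52.0              # approximate peak FP32 figure for a 7900 XT

for ratio in (16, 32):               # old 1:16 ratio vs the rumored 1:32
    print(f"1:{ratio} -> ~{fp32_peak_tflops / ratio:.1f} FP64 TFLOPS")

# For comparison, a 16-core Zen 4 CPU doing ~16 FP64 FLOPs/cycle/core at ~5 GHz
# peaks around 16 * 16 * 5e9 = ~1.3 FP64 TFLOPS, which is roughly the point
# the comment above is making.
```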
Bruzzone - Tuesday, December 13, 2022 - link
It's AMD on a shoestring. You get a specification, implemented in hardware, and some basic proofs of concept, none of which can be called an application, compatibility or performance report subject whole platform proofs, because all those are kept secret by their developers. mb
Charizzardoh - Thursday, November 3, 2022 - link
Don't understand why there's only a $100 difference between the two.
Mr Perfect - Thursday, November 3, 2022 - link
I'm also wondering why they didn't use the 7950 XT name for anything. There have been zero rumors of anything bigger, but they're also being coy about the name of this die.
Otritus - Thursday, November 3, 2022 - link
AMD has the ability to stack Infinity Cache and use higher-clocked memory. Pump the card to 400-500 watts and there is your 7950XTX. I still think they should have done a 7950XT, with an XTX refresh, or resurrected the 7970 branding. Maybe a 7970 3GHz Edition would be cool.
nandnandnand - Thursday, November 3, 2022 - link
Otritus got it. There have been rumors that AMD can do a "1-hi" stacking of Infinity Cache, allowing a doubling to 192 MB. Then there will be considerably faster GDDR6 memory available within months. They could also bin the GCD and raise clocks. So there is easily room for AMD to create a 4090 Ti competitor.
haukionkannel - Friday, November 4, 2022 - link
Yeah. Use more power for little uplift in speed!
It is possible, but maybe only as an AIB version next year.
BushLin - Saturday, November 5, 2022 - link
From what's available via leaks, the launch RDNA 3 cards are not as far up the curve of diminishing returns as the 4080/4090, which could have easily been cheaper-to-produce, lower-wattage cards with similar performance... and fit into regular cases.
Otritus - Thursday, November 3, 2022 - link
I am intrigued here as well, because those 4GB of VRAM may be worth $100 to some people. If they are saying 0 VRAM value, then that 11% price gap is like a 5% performance gap at the high end. The clock speed and bandwidth reduction indicate a larger than 11% drop in performance, so I am quite confused about RDNA3 performance scaling.
Bruzzone - Tuesday, December 13, 2022 - link
Slow bus? Poor latency? mb
nandnandnand - Thursday, November 3, 2022 - link
Same, it could conceivably make the XTX a better value than the XT.
That $100 covers a whole MCD, a higher-quality GCD, and 4 GB more VRAM.
If I had to guess the XT will drop in price or the XTX will trend up (AIB models or whatever).
Bruzzone - Tuesday, December 13, 2022 - link
6 x 16 MB SRAM, an open multi-source memory component, $25 to $30 max. mb
MTEK - Friday, November 4, 2022 - link
The $100 difference is a marketing trick that works particularly well when presented with exactly two choices. They know if you're spending $900 on a graphics card, you have an extra hundred bucks lying around.
Threska - Friday, November 4, 2022 - link
Even someone working their McDonald's job could make up that difference in less than a week.
Zoolook - Friday, November 4, 2022 - link
It probably reflects the yield: if the yield is very good, they want to sell as many XTXs as possible and not have to use fully working dies to build the XT.
What would be fun is if they release a dual card with two GCDs; it seems they have the tech to do it package-wise, the issue would be the power requirements.
Might be they will do that first as an Instinct card, but it's clearly where they are going.
Khanan - Saturday, November 5, 2022 - link
There will not be a 2-GCD card this gen, as it seems they haven’t solved the issue of GCD-to-GCD communication yet. The issue is it needs vast amounts of bandwidth and good latency to communicate, otherwise it will not behave like a single GPU, and that is what you want. Maybe next gen, or the gen after that.
Threska - Thursday, November 10, 2022 - link
They know how to do bandwidth, it's cost that would need to be dealt with.
https://youtu.be/4r_SwjYogQE
Khanan - Thursday, November 10, 2022 - link
That’s not a gaming card, and these dual-GPU boards are handled as 2 GPUs by the system, so not what you want for a gamer GPU, as Crossfire and SLI are dead and not good enough. Instinct isn’t Radeon.
Bruzzone - Tuesday, December 13, 2022 - link
v640 620 mb
Bruzzone - Tuesday, December 13, 2022 - link
v760 740 dual. mb
Bruzzone - Tuesday, December 13, 2022 - link
Oh the VRAM difference 24 v 20, $40. mb
AdrianBc - Saturday, November 5, 2022 - link
Both NVIDIA and AMD have set prices for their second-best card that provide a much lower performance per dollar than their top card.
I assume that this is a win-win from their point of view. The buyers who want the best efficiency for the money they spend will be lured into buying the most expensive card, while those who cannot afford the highest price will be forced to buy the cheaper card, which is nonetheless more profitable for the vendor.
The Radeon 7900XT has about the same FP32 TFlop/s value as RTX 4080, but about the same $ per Tflop/s value as the less overpriced RTX 4090.
On the other hand the top model Radeon 7900XTX, has about 77% of the FP32 throughput of RTX4090, but only 62% of the price, so its performance per dollar is about 25% higher than that of RTX 4090.
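The arithmetic behind those percentages, sketched with launch MSRPs and approximate peak FP32 figures (marketing numbers, not benchmarks, so the exact result depends on which peak figure you plug in):

```python
# Rough performance-per-dollar comparison from published peak FP32 and MSRP.
cards = {
    "RX 7900 XTX": (61.0, 999),   # ~peak FP32 TFLOPS, launch USD
    "RTX 4090":    (82.6, 1599),
}
xtx, nv = cards["RX 7900 XTX"], cards["RTX 4090"]
print(f"throughput ratio: {xtx[0] / nv[0]:.0%}")    # roughly three quarters
print(f"price ratio:      {xtx[1] / nv[1]:.0%}")    # about 62%
print(f"TFLOPS-per-dollar advantage: "
      f"{(xtx[0] / xtx[1]) / (nv[0] / nv[1]) - 1:+.0%}")
```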
Obviously, AMD had to offer much better performance per dollar, because otherwise their cards would not have been competitive with those of NVIDIA, which have much better software support.
Silver5urfer - Thursday, November 3, 2022 - link
Silver5urfer - Thursday, November 3, 2022 - link
AMD really is going super conservative here. The entire GCD is just 300mm2, the AD102 is 600mm2. Yes there's Chiplet Cache but still, and all that at 350W. They should have made bigger dies and even clocks are super conservative again vs Nvidia, maybe they just want to keep the cards BOM and overall packaging smaller. The reference card is barely 3 slots lol, it's 2.5 nice move.Coming to the ILP, also finally good to see AT PR it helps a lot to understand. So yeah ILP is new thing, it screams just like Nvidia's new FP32 throughput calculations too extremely bloated and non comparable to other Archs and their own designs. The most interesting is why AMD put AI and not using them for anything related to their GPU performance. I think AMD might pull a Xilinx move soon, I really hope they do.
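For what it's worth, the "bloated" FP32 numbers come straight from counting dual-issue; a minimal sketch with approximate published specs (96 CUs x 64 ALUs and ~2.5 GHz boost are assumptions based on AMD's announced figures):

```python
# Peak FP32 = ALUs * 2 FLOPs (FMA) * clock; RDNA 3's dual-issue, like
# Ampere/Ada's doubled FP32 paths, doubles the headline figure again.
def peak_fp32_tflops(alus: int, ghz: float, dual_issue: bool = False) -> float:
    return alus * 2 * ghz * (2 if dual_issue else 1) / 1000

alus = 96 * 64                                       # assumed 7900 XTX shader count
print(peak_fp32_tflops(alus, 2.5))                   # ~30.7 "classic" TFLOPS
print(peak_fp32_tflops(alus, 2.5, dual_issue=True))  # ~61.4 headline TFLOPS
```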
Now moving onto performance and Price looks like the 1.7X slide on 6950XT vs 7900XTX is at native res no FSR or other gimmicks that translates to a good hefty jump, Plus given the fact 6900XT is as low as $650 now. Super value cards. Also that perf means AMD will match / lose maybe 10-20% depending on title with 4090 at 60% lower cost and no BS 12 Pin nightmare.
RT is low it barely matches and beats a 3080. So RT is still Nvidia's ground, not a surprise. But the fact that AMD is able to match the Ada at significantly lower power is superb. They should have gone with GDDR6X to improve even more bandwidth to get more performance out. Also on RT it's a garbage tech, introduces huge GPU penalty and performance drop forces to use Upscaling BS, way better with pure Rasterization just look at Crysis 3 with it's superior no-TAA solution and ultimate Tessellation with insane GFX fidelity, it's been 10 years and that game has SLI support too, 3090 SLI = 2X FPS at 4K+ Shame it's all gone for the new garbage Upscaling, TAA and the Frame Interpolation (more below).
The added feature set like AV1 is a super welcome move, also perhaps it will improve the P2P Trackers to finally update their encodes to AV1 from H264. I hope it happens that makes super nice but since nowadays encodes are rare, Scene is already at all time low with no many trackers only Anime may improve but that also have low fansub and top encoders are also thinning down. Anyways it's a good package.
Now the elephant in the room is pricing. $1000 7900XTX vs $1600 4090, RDNA3 rips out Nvidia without a doubt. Not a surprise how Nvidia axed 4080 12GB rip off edition lol it got eaten by AMD and it gets beaten by a 3090 too in VRAM buffer and performance at Rasterization too what a joke.
Finally what is that AMD Fluid Motion technology ? No word on that from AT probably NDA or something. Whatever it is I hope AMD is not going to add fake frames to the game like Nvidia BS DLSS3 and pumps latency along with insane GFX fidelity downgrades, Nvidia tech adds DLSS issues and Frame Insertion artifacts + latency total loss. Really hoping AMD doesn't do it, I wonder if they do and give it to Ampere cards, totally Anti-Climatic move since NV cards already have Reflex (AMD had low latency too). Also AMD's new Hyper-RX more lower latency tech.
Kangal - Thursday, November 3, 2022 - link
Completely agree.
The only caveat is that this is a new style of computing (dual-GPU/chiplets), sort of, so it is likely we will see some software issues. Now whether these are tiny bugs that aren't important, or bigger annoying ones, is anyone's guess.
But in time, they will get better at building these and the software will mature. Then they can go all-in by using better nodes, higher clocks, more cache, larger bus, faster memory, etc etc. Then Nvidia will be in trouble, without having some sort of technological advantage.
It's really similar to how things played out in the CPU space, with Zen1 getting back into the game. With Zen2 (imho) having the lead. And then Zen3 kind-of winning (Intel i-10 and Intel i-11) and losing (Intel i-12). The RDNA-1 cards like 5700XT got back into the race. The RDNA-2 cards like RX6800 was the clear choice (with RT not being important yet). And now RDNA-3 likely winning against the current and previous options, but potentially losing against the new options (RTX 4090Ti, 4090, 4080Ti, 4080).
I bring this up, because it just exposed Nvidia. They UN-launched the "RTX 4080" because how embarrassing it was. The RX 7900XT at the same price completely trashes it. Nvidia thought they could use a 4060Ti-4070Ti card, and ridiculously bump up the price (and name). They did this in the past and got away with it, but this time there's competition. I guess their spies in AMD didn't give them accurate information ; )
Khanan - Friday, November 4, 2022 - link
All Nvidia cards are overpriced, especially the 4080, with a $500 price hike compared to its predecessor, which also had the big chip and not just the semi-big chip. Nvidia has devolved into a terrible company these days, just fixated on money. The new old Intel.
Oxford Guy - Sunday, November 6, 2022 - link
They’re all ‘fixated on money.’
Threska - Monday, November 7, 2022 - link
Right. It's a matter of scale. "Fixated on money" taken to extremes could give us... the current NVIDIA situation. A little more thought, and a little more money, could have reduced the chances of bendgate.
Bruzzone - Tuesday, December 13, 2022 - link
cost : price / margin is understood. mb
nandnandnand - Thursday, November 3, 2022 - link
24-27 Gbps GDDR6 will be available to AMD if they want to make a 7950 XT(X) next year. They don't need to use GDDR6X and they can probably produce a decent response to a 4090 Ti. I assume that doubled Infinity Cache on a 7950 XT could lessen the need for more memory bandwidth anyway.
What is the encoding efficiency of AV1 compared to relatively widespread H.265 these days? I remember torrenting H.265 videos years ago, even if H.264 is still the popular choice. I just hope that AV2, which is being worked on, will be able to keep H.266 from gaining traction, with faster adoption from being built on AV1. Remember that Google's plan was to update to VP10 and future versions every 18 months.
I didn't expect AMD to increase VRAM to 20-24 GB before the rumors popped up recently. A VRAM war between AMD and Nvidia is nice because it will put better hardware for AI image models like Stable Diffusion in the hands of consumers. Maybe one of them will go to 32 GB within the next 1-2 generations.
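On the faster-GDDR6 point above: bandwidth scales linearly with the per-pin data rate, so a quick sketch (a 384-bit bus is assumed, as on the announced cards):

```python
# GDDR6 bandwidth: GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gbs(384, 20))   # 960.0 GB/s, roughly the launch 7900 XTX config
print(bandwidth_gbs(384, 24))   # 1152.0 GB/s with the faster GDDR6 mentioned above
```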
GeoffreyA - Friday, November 4, 2022 - link
AV1 is in H.266/VVC's league, but lower in efficiency and not as sharp. At any rate, it's excellent and superior to H.265. The problem with AV1 was its snail-like encoding speed, but libaom has sped up over time, and Intel's SVT-AV1, which had speed but deplorable quality, is usable nowadays and comparable in quality to libaom.
As for H.266/VVC, it's currently the king but largely inaccessible. Fraunhofer's encoder is available but difficult to use, and x266 is nowhere to be seen.
Makste - Saturday, November 5, 2022 - link
It's a great summary that you have going there on the current encoding environment. Thanks.GeoffreyA - Thursday, November 10, 2022 - link
I'd just like to add on to my original comment, owing to a bit of VVC testing I did this week after a long time. The gap between AV1 and VVC is actually smaller than I had thought. Indeed, at least for the low-motion, 1080p clip I tried, one can't really tell the difference, glancing roughly. Only on closer inspection is it evident that VVC is better, but only slightly. This is with Fraunhofer vvcenc 1.5/1.6.1 medium preset; slower does improve quality but speed is terrible. Seems open-source has caught up with the patent crowd. As for x265, on very low bitrates, say < 200 kbps, it collapses, whereas AV1 and VVC hold up remarkably well.In short, run far away from x265, and go for AV1 if compatibility permits, specifically libaom. H.266/VVC is only millimetres ahead of AV1.
Zoolook - Friday, November 4, 2022 - link
I'm not sure if you will get a memory war; AMD has almost always been more generous with memory, and only when they were capped by their own hardware did they lag behind (the 4GB cap for the Pacific island chip).
Zoolook - Friday, November 4, 2022 - link
Chips (plural), Tahiti, Hawaii etc.
Oxford Guy - Sunday, November 6, 2022 - link
The 4 GB cap was only for Fiji, due to the HBM1.
TheinsanegamerN - Monday, November 7, 2022 - link
Hawaii wasn't limited, though? It had 4GB, which was plenty for the capability of said GPU. There were also 8GB versions.
nandnandnand - Friday, November 4, 2022 - link
I just don't think it's going to stop at 24 GB for consumer cards. And the addition of a 20 GB model could force more Nvidia cards to go above 16 GB. This will have interesting effects in the long run.
Bruzzone - Tuesday, December 13, 2022 - link
Memory as in cache solves a lot and hides a lot. mb
GeoffreyA - Friday, November 4, 2022 - link
"The added feature set like AV1 is a super welcome move"True. But in practice, what sort of quality will these hardware AV1 encoders yield? Usually, not very good. One might achieve superior compression/quality using software encoding, say, with x265 or SVT-AV1. So, really, in my books, this is more marketing than benefit, though decoding is welcome.
Nowadays, there are an increasing number of encodes in proper AV1 (aomenc, etc.) and their quality is fantastic. Anime, too, works wonderfully because softness is more acceptable there. I would say the problem with such encodes is that they might play on many people's TVs, AV1 support being rare, and Opus. And for some low-end devices, 1080p AV1 really struggles.
GeoffreyA - Friday, November 4, 2022 - link
* might not play
Silver5urfer - Friday, November 4, 2022 - link
Interesting comments on encoders. I only knew that AV1 is painfully slow; other than that, not much. Also, I was just assuming it would be good, maybe I should have added a few points... One more thing: Google and the streaming corporations want to avoid royalties, which is why they want to push their OSS codecs, but the entertainment industry is knee-deep in these patents and so on. So I do not think 4K BDs will ever use AV1; they'll rather stick to HEVC and its successor only.
Also, I think it's more for that "streamer audience" that AMD is targeting, like NVENC. In the end we just encode using CPUs only, an important point which I failed to mention.
Threska - Friday, November 4, 2022 - link
I wonder how much of this supposed "superiority" is because of humans being in the loop?
GeoffreyA - Friday, November 4, 2022 - link
Ultimately, the humans are the ones watching the video.
Threska - Monday, November 7, 2022 - link
Yes, they are. The point is that the best encoding results usually come when humans are adjusting the settings. AI may eventually get there.
https://www.digitalmusicnews.com/2022/11/03/meta-a...
GeoffreyA - Tuesday, November 8, 2022 - link
Yes, I agree: AI will take over this domain and is already doing so. This past week, I came across Meta's new "encodec," which beats even Opus, and whose design appears very different. I look forward to the boffins at Hydrogenaudio coming up with a listening test on this one.
With respect to our current video codecs, it certainly seems a struggle each time they are improved. No doubt, AI will open new avenues for increasing compression. Perhaps a lot is to be gained by some sort of global compression, whereas today's encoders work on a small locality.
GeoffreyA - Friday, November 4, 2022 - link
Solid points, and you're right, especially that the BDs, and future film discs, will never use open-source codecs, and that these hardware encoders are aimed more at the streaming folk.
GeoffreyA - Friday, November 4, 2022 - link
Lamentably, "fake frames" appears to be the way the industry is going these days. We'll just have to wait for some renaissance in 3D that resets the current nonsense in games and takes fidelity to the next level. Some new paradigm is needed.
Bruzzone - Tuesday, December 13, 2022 - link
Huffman coding on motion estimation has always, by its very nature, faked frames. mb
Mr Perfect - Thursday, November 3, 2022 - link
Is this the first time we're seeing defective chiplets used as placeholders instead of going in the trash? Seems like a cost-effective way to balance out a package.
Ryan Smith - Thursday, November 3, 2022 - link
No. In the CPU world, AMD will ship CPUs with defective CCDs. In the GPU world, NVIDIA has been doing it for ages with defective HBM stacks (technically this is logic silicon as opposed to DRAM, but the idea is the same).
nandnandnand - Thursday, November 3, 2022 - link
Tom's Hardware: AMD Recycles Dual-Chiplet Ryzen 7000's as Ryzen 5 7600X CPUs, Again
"inquisitive users found both Ryzen 7 5800X and Ryzen 5 5600X CPUs were rocking two CCDs, but just one was being used and was necessary.
The approach isn't a secret, or new. AMD was fully transparent about this aspect of its production strategy when Tom's Hardware's deputy managing editor asked about it last year."
The Von Matrices - Thursday, November 3, 2022 - link
Most manufacturers making interposers sell some products with defective chiplets. While you can test the chiplet by itself to ensure functionality, there is a non-negligible failure rate in bonding the chiplets to the interposer. That means you get some interposers with bad connections to chiplets. Then you have to either discard the entire interposer or sell products with bad chiplets.
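As a rough illustration of why that becomes a real cost, a minimal sketch; the chiplet count and per-bond failure rate below are made-up figures, not anything AMD or TSMC has published:

# Toy model: chance that a multi-chiplet package ends up with at least one bad
# chiplet-to-package bond, given an assumed per-bond failure probability.
def p_package_has_bad_bond(num_chiplets: int, p_bond_fail: float) -> float:
    return 1.0 - (1.0 - p_bond_fail) ** num_chiplets

# e.g. 7 chiplets (1 GCD + 6 MCDs) and a hypothetical 0.5% failure rate per bond
print(f"{p_package_has_bad_bond(7, 0.005):.1%}")  # ~3.4% of packages affected

Even a small per-bond failure rate adds up across a whole package, which is why salvaging those parts as cut-down SKUs beats scrapping them.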
AndrewJacksonZA - Thursday, November 3, 2022 - link
So that's R20k for the 7900XTX and R18k for the 7900XT. Geez. That's 3080 money (in South Africa) for Lord knows how much more performance. Wow.
Also, the dual SIMD might turn this into a compute MONSTER for cryptographic hashing and distributed computing like the BOINC or dnetc research projects. <3
Harry_Wild - Thursday, November 3, 2022 - link
AMD kept the wattage the same as the previous versions, 355 and 300 watts, but upped the performance 2 to 2.5 times over the previous models.
nandnandnand - Thursday, November 3, 2022 - link
6950 XT was 335 Watts, not 355. So the XTX uses a little bit more power. The 6900 XT indeed was 300 Watts. There could be a lot of room to overclock these.
Harry_Wild - Friday, November 4, 2022 - link
Thanks for the correction!
Da W - Thursday, November 3, 2022 - link
I think they will put two Navi 3x dies on a single package for the 7950 XTX, kinda like they did with Vesuvius.
nandnandnand - Thursday, November 3, 2022 - link
My bet is doubled Infinity Cache with 3D stacking, faster GDDR6, and higher clocks. They will save complicated multiple-GCD configurations for RDNA 4 in 2024.
LiKenun - Thursday, November 3, 2022 - link
No graphics card for me then… Would it have killed them to have PCIe 5.0 with 8 lanes or even 4 lanes since PCIe 3.0 with 16 lanes still performs 98% as well as PCIe 4.0 with 16 lanes?
nandnandnand - Thursday, November 3, 2022 - link
Presumably you think that there will be another PCIe 4.0 x4 GPU this generation like the 6500 XT. Maybe they don't even release something for that segment and just sell cheaper RDNA 2 GPUs like the RX 6600 to fill the low-end. They can even continue to manufacture them if they want to.
LiKenun - Thursday, November 3, 2022 - link
I was specifically commenting on this: "Finally on the subject of AMD’s GPU uncore, while not explicitly called out in AMD’s presentation, it’s worth noting that AMD has not updated their PCIe controller. So RDNA 3 still maxes out at PCIe 4.0 speeds, with Big Navi 3x offering the usual 16 lanes. This means that even though AMD’s latest Ryzen platform supports PCIe 5.0 for graphics (and other PCIe cards), their video cards won’t be reciprocating in this generation. In fact, this means that no one will have a PCIe 5.0 consumer video card."
On other news sites though, they mentioned that the official specifications mention PCIe 5.0 with 16 lanes. I guess we’ll have to wait a bit for definite answers on this.
nandnandnand - Thursday, November 3, 2022 - link
TechPowerUp says 4.0. Guru3D says we should assume 4.0 because 5.0 was not mentioned in the presentation. Wikipedia says 5.0 but the article is not even done being written and doesn't source it. Wccftech says 5.0. AMD's website says nothing about PCIe on the press release or the product pages.
But back to your original point. These are 16-lane cards, so no problem there. What you have a problem with is some hypothetical PCIe 4.0 x4 7500 XT, right?
The Von Matrices - Thursday, November 3, 2022 - link
A value-priced GPU like the 6500/7500 will not be put into a high-end motherboard, making the discussion of PCIe 5.0 support a moot point, since mid-range and value motherboards do not support PCIe 5.0.
Ryan Smith - Friday, November 4, 2022 - link
To clarify, there was a Q&A session immediately following the recording of the presentation. It was there that someone (not me) asked about PCIe support, and we were told PCIe 4.0. So that is the source of those claims.
LiKenun - Friday, November 4, 2022 - link
@Ryan Smith: Thank you for the clarification.
@The Von Matrices: The main issue is that PCIe 5.0 lanes are limited on consumer motherboards, so every device running at half or a quarter of the maximum speed is wasting half or three quarters of the lanes it's taking up. Benchmarks on other sites show the Nvidia 4090 performing 98% as well on PCIe 3.0 x16 and 92% as well on PCIe 2.0 x16. Assuming the performance penalties translate over to Radeon, a PCIe 5.0 x8 card could run with the equivalent bandwidth of PCIe 4.0 x16 and a 0% performance penalty, and those with PCIe 4.0 boards would incur a mere 2% penalty. But because the card is actually PCIe 4.0, running it at x8 incurs a 2% penalty, and those of us who really need to maximize PCIe lane utilization might have to bump the graphics card down to an x4 slot, incurring an 8% penalty.
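The lane arithmetic behind that argument, as a quick sketch (per-lane figures are approximate usable rates after 128b/130b encoding):

# Approximate usable bandwidth per PCIe lane, in GB/s (one direction).
PCIE_GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    return PCIE_GBPS_PER_LANE[gen] * lanes

print(link_bandwidth(4, 16))  # ~31.5 GB/s
print(link_bandwidth(5, 8))   # ~31.5 GB/s -- same pipe from half the lanes
print(link_bandwidth(4, 8))   # ~15.8 GB/s -- what an x8 slot gives a PCIe 4.0 card

Which is exactly the complaint: a PCIe 5.0 x8 card would have freed up eight lanes without giving up any bandwidth relative to 4.0 x16.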
Chaser - Thursday, November 3, 2022 - link
All we're getting is "faster than a 3090Ti"?
nandnandnand - Thursday, November 3, 2022 - link
70% better raster is about what the 4090 did. These are 4090 and 4080 competitors.
ToTTenTranz - Thursday, November 3, 2022 - link
" Of particular note, both cards will feature, for the first time for an AMD consumer card, a USB-C port for display outputs."All reference Navi 21 cards had a USB-C port with the same alt-mode.
It should be quite convenient for future VR headsets, assuming it passes USB3 and USB-PD as well (it should, since it's part of the spec).
nandnandnand - Thursday, November 3, 2022 - link
Future VR headsets should be standalone, or use WiGig 2 (802.11ay) or better to connect to a nearby desktop. Connecting cables to a VR headset is caveman stuff, with the possible exception of a battery extension that you can secure to your body.
ToTTenTranz - Friday, November 4, 2022 - link
It depends on what you mean by "future VR headsets".
Future VR headsets in 2023 to early 2025? Probably not, as most of them will be wired anyway. WiGig 2 probably eats a lot of power, so it requires large batteries, making the headsets a bit heavy, and realistically most VR experiences are seated anyway. Starting with PSVR2, which could be the most popular headset of its time, like PSVR1 was in 2016-2018.
Future VR headsets in 2025-2028? You could be right.
Future VR headsets 8+ years from now? Probably not. At that time we're probably going to be using AR headsets that double as VR on demand.
Regardless, GPU makers develop and market their products for a 2-3 year lifecycle, so having a USB-C output for wired VR headsets makes sense IMO. Just like Sony did on the PS5.
Threska - Sunday, November 6, 2022 - link
Not just VR. Some graphics display tablets (Huion) use a USB-C connection.
TheinsanegamerN - Friday, November 4, 2022 - link
Nothing like strapping a 25 kg battery to your back to avoid the use of one 5-ounce cable.
nandnandnand - Friday, November 4, 2022 - link
Tethered VR is junk, and the battery life could probably double with a 0.5 kg battery.
Ryan Smith - Friday, November 4, 2022 - link
"All reference Navi 21 cards had a USB-C port with the same alt-mode."You're right! That's a massive brain fart on my part. Thank you.
Khanan - Saturday, November 5, 2022 - link
Shit happens. Some premium custom models had them too. I suspect it will be the same with RX 7000 Gen.
Mat3 - Thursday, November 3, 2022 - link
"This is a notable change because AMD developed RDNA (1) in part to get away from a reliance on ILP, which was identified as a weakness of GCN"What? That is so wrong. When AMD moved to GCN, that's when they stopped trying to extract ILP. GCN does not do ILP in wavefronts at all, each lane in a SIMD is single issue, while it was the prior arch before GCN (VLIW) that was all about extracting ILP.
Khanan - Saturday, November 5, 2022 - link
Yes, the issue with GCN was that it issued 64-wide waves; RDNA solved this by going to 32-wide instead, which is more flexible and thus gives higher utilization of the engine and better perf/W.
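To make the utilization point concrete, here is a toy sketch of branch divergence at different wave sizes; it is a simplified mask-based model, not a simulation of either architecture, and the thread pattern is invented:

# Toy divergence model: a wave must execute every code path any of its threads takes,
# with non-participating lanes masked off. Smaller waves keep divergence more local.
# Assumes the thread count is a multiple of the wave size.
def wasted_lane_slots(path_of_thread, wave_size):
    wasted = 0
    for i in range(0, len(path_of_thread), wave_size):
        wave = path_of_thread[i:i + wave_size]
        for path in set(wave):
            wasted += wave_size - wave.count(path)  # masked-off lanes during that pass
    return wasted

threads = ["A"] * 32 + ["B"] * 32   # first half branches one way, second half the other
print(wasted_lane_slots(threads, 64))  # 64: one wave64 runs both paths half-masked
print(wasted_lane_slots(threads, 32))  # 0: each wave32 is uniform, no masked lanes

In the wave64 case the single wave pays for both paths; split into two wave32s, each wave stays fully occupied.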
Yojimbo - Saturday, November 5, 2022 - link
Doing a little searching, it seems that RDNA already increased reliance on ILP over GCN, in exchange for being able to keep the compute units fed with fewer threads.
Dribble - Friday, November 4, 2022 - link
For $1000 they could at least have used GDDR6X.
TheinsanegamerN - Friday, November 4, 2022 - link
GDDR6X is an Nvidia/Micron exclusive deal.
Khanan - Saturday, November 5, 2022 - link
It isn't needed when you can use G6 at 20 Gbps instead, which is competitive with G6X. Samsung has already talked about G6 at 22 and 24 Gbps as well, so G6X isn't that relevant. It's prototype tech that will probably be superseded by GDDR7 sometime in the future, when G6 isn't cutting it anymore.
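The bandwidth arithmetic behind that, as a minimal sketch; the data rates and bus widths are the commonly reported figures for these parts, so treat them as approximate:

# Memory bandwidth in GB/s = per-pin data rate (Gbps) x bus width (bits) / 8.
def mem_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(mem_bandwidth_gbs(20.0, 384))  # 960.0 -- 20 Gbps GDDR6 on a 384-bit bus (7900 XTX class)
print(mem_bandwidth_gbs(22.4, 256))  # 716.8 -- roughly where a 256-bit GDDR6X card lands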
Zoolook - Saturday, November 5, 2022 - link
Not to mention that GDDR6X is a power hog; it would have had a noticeable effect on the TBP.
TEAMSWITCHER - Friday, November 4, 2022 - link
I certainly don't need to buy a new video card every generation, and this is clearly the generation to skip... See you all in another two years.
yeeeeman - Friday, November 4, 2022 - link
It might not be a chart-topping product, but DAMN, did AMD engineer something amazing with this. It's just an engineer's dream, frankly.
Hrel - Friday, November 4, 2022 - link
A thousand dollars for a GPU... I'm still over here unwilling to pay more than $200.
I'd be pretty upset about this if there had been any good games released recently. I guess it's good the gaming industry is dead, because GPUs are entirely unaffordable.
TheinsanegamerN - Monday, November 7, 2022 - link
GPUs at the $1000 level have been available for over a decade; stop being a drama queen.
Bruzzone - Friday, November 4, 2022 - link
An artificially low MSRP gives the channel price-making control. mb
Oxford Guy - Sunday, November 6, 2022 - link
Thank you. I was just going to snark about everyone taking the MSRPs so seriously.
catavalon21 - Friday, November 4, 2022 - link
I will be interested to see how these perform at double precision, which is advertised to churn at 1/16 the single-precision rate (whereas NV is again at 1/64 with this gen).
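For a rough sense of what those ratios mean in absolute terms, a quick sketch; the FP32 figures are approximate published peak numbers, and the 1/16 and 1/64 ratios are the ones cited above:

# Peak FP64 throughput implied by an FP32:FP64 rate ratio.
def fp64_tflops(fp32_tflops: float, ratio: int) -> float:
    return fp32_tflops / ratio

print(fp64_tflops(61.0, 16))  # ~3.8 TFLOPS for a ~61 TFLOPS FP32 part at 1/16
print(fp64_tflops(82.6, 64))  # ~1.3 TFLOPS for a ~82.6 TFLOPS FP32 part at 1/64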
Dizoja86 - Friday, November 4, 2022 - link
I guess we'll see how these turn out. Ray tracing is not an optional feature at this point, so at these price points AMD had better at least be hitting 3090 levels of performance with maxed RT settings, or else buyers are going to be stuck with tech that's outdated upon release. And hopefully FSR 3.0 is a better competitor for DLSS 2 from a visual standpoint than FSR 2.0 is.
There were good reasons why RDNA 2 was so unpopular in the desktop PC space. I'm really hoping AMD is more competitive this generation, for the sake of people buying Nvidia as well.
Khanan - Saturday, November 5, 2022 - link
I don't see your negativity. FSR 2.1 is easily good enough to compete against DLSS. RT will be 50-70% better with the new Radeons, which should be enough, though of course not on the level of the 4090; they are still one generation ahead there, and it seems they invest more.
Dizoja86 - Sunday, November 6, 2022 - link
I'm glad you don't see negativity, as there wasn't any. This is a tech site, so we try to be honest about where hardware stands, and AMD hasn't truly been competitive in the high-end GPU space for many years now.
And no, FSR is not easily able to compete against DLSS, unless you're strictly referring to performance. Nobody who has done a deep dive into the image quality would say that FSR is up to par with DLSS, which is to be expected, as current AMD cards don't have the hardware to drive more advanced image reconstruction features. It's still better than DLSS 1.0 by a significant margin, but nobody who had the option to choose one or the other would choose FSR over DLSS 2.x.
Again, I hope AMD really pulls through this time and justifies those price points without the caveats that they can't compete with modern Nvidia when it comes to common and widely used rendering methods. Rasterization alone is not good enough. Ray-tracing matters, image reconstruction matters, and other software features matter as well when it comes to justifying the prices of GPUs.
Threska - Sunday, November 6, 2022 - link
Your "deep dive" sounds very subjective.Kangal - Sunday, November 6, 2022 - link
Hardware Unboxed did a pretty thorough analysis of DLSS 1, DLSS 2, FSR 1, and FSR 2. The findings were that AMD, although late to the game, is pretty competitive on the software front. And they'll just keep getting better and more dominant with the support from Microsoft (Xbox) and many third-party developers (PlayStation), since the consoles rely on their technology.
I'm not a fan of Khanan, but he is correct on this one.
Also, AMD has been competing in the high end; you just haven't noticed. It's not them, it's you. Take the fast RX 6900 XT and RX 6950 XT: basically the whole RDNA 2 product line was more impressive to me, compared to the big, hot, and thirsty RTX 3000 cards, which not only skimped on RAM but charged a premium. Then there were the likes of the Radeon VII and Vega 64, which had poor launches compared to the GTX 1080 Ti and GTX 1080, but years later we actually see the AMD cards take the lead, since they had the superior hardware and their software has only begun to catch up. The same thing happened with the R9 290X versus the GTX 980, or the HD 7970 versus the GTX 680. Even their other cards, like the RX 480 and RX 5700 XT, have aged better than their competitors, the GTX 1060 "Ti" or the RTX 2070 Super.
Dizoja86 - Monday, November 7, 2022 - link
That's a pretty strange take, really. Hardware Unboxed did not do an in-depth look at image reconstruction. Motion handling is a key metric of image reconstruction, and they really did the weakest job of assessing common weak points, which is strange, as I generally expect better from them. The lack of dedicated hardware for reconstruction is just not something that can be fully overcome, no matter how much work is put into the software.
Also, why would you think the RDNA 2 architecture was more impressive than the 3xxx series? Serious question, as it had last-gen hardware features compared to the competition and lagged significantly with software as well. The RAM difference also was not an issue outside of AMD-sponsored titles.
Old AMD cards aging better than Nvidia also isn't a great point, but, yes, AMD hardware has tended to increase in performance after release due to rougher drivers. That's not really a great marketing angle, but it's not as much of a thing now that AMD's drivers are finally getting closer to par.
Again, I really want AMD to perform well and be a genuine competitor at the top of the charts (which even AMD has admitted they're not doing right now). It's a tough battle for them as their R&D budgets have been so much lower than Nvidia's on the GPU front, but I'm really hoping that has been changing with the success of Ryzen and the sales they've been making on the console side. I've historically bounced back and forth between companies up to the 290 which was my last AMD card, and would love to once again have more than one option for cutting edge graphics tech each generation.
Kangal - Monday, November 7, 2022 - link
Not so much the RDNA 2 architecture, which was good, but more so the lineup of cards.
I just found them to offer better performance at lower prices compared to Nvidia's RTX 3000 series. Obviously they didn't have DLSS 2.1, RT performance, NVENC, and other advantages. And to top it off, Nvidia was using a lesser node, which just speaks to the strengths of their architecture.
I still don't think RT is important for gamers just yet, but next generation it will likely be more mainstream and perhaps have a larger visual impact.
Hardware Unboxed didn't have specialised tools to decode the software and analyse it from the back end. What they did was pixel-peep and judge the final result, which I deem acceptable, but we can disagree on this point. Perhaps you have another review that has done a better examination?
I'm not sure exactly how the future will pan out. From my understanding, chiplet design means more complexity (up front), slightly lower performance, and slightly higher power draw. So we might see AMD become aggressive in the dGPU market but Nvidia may always hold the crown at the top performance, and hold the efficiency lead in laptops. AMD meanwhile could hold most of the market share in the Desktop PC space, simply by offering faster and cheaper cards in the (RTX xx80 - xx60) middle.
AMD finally have some breathing room for R&D thanks to multiple revenue streams: shareholders, servers, consoles, CPUs, GPUs. Each is competitive in its field and generating profits which cycle back into the business.
Khanan - Thursday, November 10, 2022 - link
If I have ever seen a delusional Nvidia fan, it's you. You're talking down all of AMD's upsides while playing down all of Nvidia's weak sides. Laughable that you're pretending to be neutral. I've been using Nvidia for many, many years now, btw, and I'm still not a toxic fan like you are. I'm the paragon of neutrality, so I know that you aren't anywhere near the mind space I am in. You're not neutral; you're a hard Nvidia fan. I'm the opposite: I see all the strong and weak sides of either company and of all GPUs, despite not owning a Radeon since 2014.
TheinsanegamerN - Tuesday, November 8, 2022 - link
AMD fans need to get this through their heads: it's not a good thing that the likes of the Vega 64 are at their apex today. Are they selling them today? Nope. So it doesn't matter.
Taking 6 years to optimize your hardware is atrocious; by the time it matters, your hardware is obsolete and everyone has moved on to newer generations. This is part of why Nvidia sells so well: it's better to have 100% performance today than to have 100% performance 6 years from now.
Bruzzone - Thursday, November 10, 2022 - link
On the WW channel data last week, the Radeon VII (7 nm Vega) was the biggest AMD GPU mover between Radeon primary and secondary inventory. mb
Khanan - Thursday, November 10, 2022 - link
It's not about optimizing; the software needed to catch up to the forward-looking architecture, because it only runs well in DX12 games. The only thing they really optimized, aside from small percentage gains and new games, was DX11 performance, and that was mainly for newer Radeon cards.
To make it more clear, Vega and the Fury X have a heck of a lot of shaders, and they can only be fully utilized in low-level-API games; in DX11 maybe since the newest drivers, and only if you increase GPU utilization with a very high resolution. It has not that much to do with fine wine, unless you count the architecture aging better, but FineWine was about software aging, not hardware.
Khanan - Thursday, November 10, 2022 - link
Yes, AMD has actually been good since the RX 6000 generation, just not if you're an RT fan, at least not for 4K. The 7000 generation aims to fix that as well. FSR keeps getting better too; they have just released version 2.2 and are working on FSR 3.
The interesting thing is, I would've loved to have FSR 2.1 for my GTX 1080 Ti back at the end of 2020. It was only just integrated into CP2077 now, and I simply didn't have enough performance for that game back then. People forget that it benefits a lot of users, not just Radeon owners.
Khanan - Thursday, November 10, 2022 - link
And again you're talking complete nonsense. It seems you're a huge Nvidia fan who pretends to be neutral while absolutely not being one. The 6900 XT and 6950 XT were easily competitive with Nvidia's 3090s, and even faster at lower resolutions, which a heck of a lot of people care about more than 4K. It's only ray tracing, like I said, that keeps Nvidia ahead. The 4090 is a bit faster than the 7900 XTX in raster, but by such a small amount that it doesn't really matter, as both cards are easily fast enough for 4K at high fps. It's just ray tracing again, where Nvidia is ahead.
And FSR 2.1 is easily competitive with DLSS, as it has good enough quality, unless you're a toxic Nvidia fan who can't accept the reality of DLSS just not being that special.
Hresna - Saturday, November 5, 2022 - link
The article referred to a 4090 Ti… there's no such card yet.
I had read somewhere that AMD confirmed RDNA 3 would have the same encode/decode engine, but that seems not to be the case… I will be interested to see if 10-bit 4:2:2 decode for H.264/6 is included. It could be a game changer for many content creators…
Bruzzone - Tuesday, November 8, 2022 - link
7900 XT_ (Navi 31) at $999 and $899: I'm waiting to see the real price, which I anticipate at +20% in the channel. The 6900 XT and 6800 XT were offered at $999 and $649, which was low out of the gate; both were minimally valued by the channel on a sustainable margin, and the real price on application-perfect pricing, not an artificially low PR price, was +41% and +62% respectively. I anticipate Navi 31 priced at minimum at AD FE MSRP, if you can find one at that price. mb
nandnandnand - Thursday, November 10, 2022 - link
We don't have a GPU crisis anymore. But the XTX at just $100 more than the XT does seem hard to believe. I think we'll see AIB models go crazy with power and clocks to justify higher prices. AMD has sandbagged the performance of the reference models but mentioned that the 7900 XT_ is designed to scale up to 3 GHz. Which is... 20% higher.
Khanan - Thursday, November 10, 2022 - link
I suspect the AMD models will be near MSRP on the AMD shop, if you can't buy them in regular shops for that price with a PowerColor sticker on them. But then that's at stock clocks, without an extra-huge, silent cooler. You pay more, you get more.
Oxford Guy - Friday, November 11, 2022 - link
Is AMD still selling blowers?
Khanan - Sunday, November 13, 2022 - link
Not since the RX 6000 generation; their "made by AMD" cards are actually pretty decent, with good acoustics and good cooling.
evolucion8 - Thursday, November 10, 2022 - link
I recall that GCN was actually a TLP design, not an ILP design like Evergreen and the TeraScale architecture in previous generations. I think the issue with GCN was that the hardware scheduler wasn't able to scale as well when going beyond 11 CUs, due to internal architectural bottlenecks with instruction issue, internal bandwidth, and the 4 shaders per SE.
RDNA 1 and 2 are convincingly TLP designs, and moving to an ILP design would require a major redesign of the compiler and the ISA (RDNA can run in the Wave64 mode used on GCN), so I doubt they would make such a risky move, as it would mean their design requires optimizations from scratch. (Unless this design keeps the Wave32 approach, or goes to a Wave64 approach like GCN.)
Khanan - Thursday, November 10, 2022 - link
The issue was that GCN forced Wave64, while RDNA and its successors can do 32, aside from other optimizations and a more balanced architecture with more ROPs. GCN was simply not optimized for gaming, while RDNA is.
evolucion8 - Thursday, November 10, 2022 - link
Wave64 would leave too many bubbles, but each wavefront was 64 wide and couldn't run at 32 like RDNA. GCN also had a terrible geometry bottleneck, which only got somewhat fixed with Vega, lol.
occidental - Sunday, December 11, 2022 - link
I'd love to mate the XTX with my 7950X, currently sharing space with an Nvidia RTX A4000. However, I need vMix, DaVinci Resolve Studio, and OBS to properly leverage the encoders in the XTX at launch. My life is high-end corporate streaming and video editing, and a 4080 currently flies in all of my use cases. An extra $500 on a 4080 saves me $1000-2000/month in time rendering out 4K and greater footage.
ryanhill - Tuesday, December 13, 2022 - link
Wow, I thought it would be more expensive.
Shmee - Saturday, December 24, 2022 - link
So, how about them reviews? :P