You guys really need a "corrections" link so the comments section isn't full of people pointing out typos and malapropisms (I'm guilty of the latter myself, though).
496 GB/s for $700. I'm curious to see a retrospective of GPU memory bandwidth vs. cost over the last ten years. It feels like it's really sat still compared to transistor count. Are GPU caches getting bigger? Even then there is little that can be done about the main memory bandwidth requirements of SIMD workloads. We have faster interconnects yet the buses are staying the same or getting smaller.
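To put rough numbers on that retrospective (reference-card launch specs and MSRPs from memory, so treat them as approximate, not authoritative):

```python
# Memory bandwidth per dollar across a few NVIDIA reference cards.
# Figures are approximate launch specs/MSRPs, included for illustration only.
cards = [
    # (name, year, bandwidth in GB/s, MSRP in USD)
    ("GTX 680",        2012, 192, 499),
    ("GTX 980",        2014, 224, 549),
    ("GTX 1080",       2016, 320, 599),
    ("RTX 2080 Super", 2019, 496, 699),
]

for name, year, bw, price in cards:
    print(f"{name} ({year}): {bw / price:.2f} GB/s per dollar")
```

If those figures are right, bandwidth per dollar didn't even double over seven years, while transistor counts over the same span grew several-fold, which matches the "sat still" impression.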
Bandwidth matters less now than it did years ago, thanks to the different types of compression being used. You can fit more data into the same amount of bandwidth now than you could years ago.
Lossy compression isn't free. At some point the user will say "this looks bad". If that wasn't the case then why not compress every 64x64 tile to 1 KB? It's dependent on the data's entropy and many textures are high entropy. It's nice to have tuneable control over a soft cap, but it isn't a magic bullet that makes things better. Lossless compression would be bad in this application. No one should make a system that imposes a maximum allowed entropy on artists.
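To make the entropy point concrete, here's a toy sketch (illustrative only: real texture compressors like BCn or delta color compression don't work on raw Shannon entropy, but entropy does bound what any lossless scheme can achieve):

```python
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0 to 8)."""
    n = len(data)
    # max() clamps the -0.0 that falls out of a single-symbol input
    return max(0.0, -sum((c / n) * math.log2(c / n)
                         for c in Counter(data).values()))

flat  = bytes(4096)              # a flat single-channel 64x64 tile: all zeros
noisy = bytes(range(256)) * 16   # every byte value appears equally often

print(entropy_bits_per_byte(flat))   # 0.0: compresses to almost nothing
print(entropy_bits_per_byte(noisy))  # 8.0: losslessly incompressible
```

A guaranteed fixed-size output per tile is only possible if you either cap the allowed entropy (bad for artists, as argued above) or accept loss on high-entropy tiles.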
Memory bandwidth always has been, and remains, the bottleneck of SIMD systems.
Sadly, I think you are right. While it's commendable that AMD has always pushed higher memory capacities to the mainstream, their focus on memory bandwidth has never really paid off, and it comes at a huge expense in die area for the larger memory controller, plus an obvious energy-efficiency deficit. This is why the triple-channel memory introduced with Intel's X58 chipset was dropped in favor of a reversion to dual channel. It would be years before we moved beyond dual channel again, and even then, quad channel never became mainstream.
The reason is simple. Even on a single channel, Intel CPUs in particular show extraordinary memory performance. The controller is well optimized and cache hit rates are high. Likewise, Nvidia's excellent compression combined with optimized caches makes high memory bandwidth unnecessary.
SISD benefits greatly from caching and ILP. SIMD doesn't need ILP to keep its execution units busy, so it chews through memory bandwidth by comparison. There are also quickly diminishing returns on GPU cache size. GPUs have 20x the memory bandwidth of CPUs for a good reason: they use it.
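To put a number on how fast SIMD hardware chews through bandwidth, a back-of-the-envelope roofline sketch (my own example, not from the article; the only article figure used is the 2080 Super's 496 GB/s):

```python
# A streaming kernel like SAXPY (y[i] = a*x[i] + y[i]) does 2 flops per
# element but moves 12 bytes: read x[i], read y[i], write y[i], 4 bytes each.
flops_per_elem = 2.0
bytes_per_elem = 12.0
intensity = flops_per_elem / bytes_per_elem   # ~0.17 flops per byte

bandwidth_gbs = 496.0                         # RTX 2080 Super, GB/s
ceiling_gflops = bandwidth_gbs * intensity    # throughput cap if memory-bound

print(f"bandwidth-bound ceiling: {ceiling_gflops:.0f} GFLOP/s")
```

That works out to roughly 83 GFLOP/s, versus an FP32 compute peak on the order of 11 TFLOPS for the same card: for low-intensity streaming kernels, well over 99% of the ALUs sit idle waiting on memory.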
Somewhat related to the subject of compression... adaptive resolution is by far the best graphics technology I have ever seen. Render at 1800p, drop down to 1400p when below the target framerate, and upscale everything to 4k. No need to buy the highest-end graphics card anymore. If we had adaptive resolution when Far Cry 1 came out, there would have been no market for the 6800, just use a 6600.
Combine with checkerboarding for console, which is impressive in its own right by NEAR-HALVING the workload. So render at half 1800p every other frame (equivalent of about 2300*1300 pixels, so 1.44x 1080p, not 4.0x) and get a generated 4k image.
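Running the arithmetic on those figures (a quick sketch using exact 16:9 resolutions):

```python
def pixels(height):
    """Pixel count of an exact 16:9 resolution with the given height."""
    return (height * 16 // 9) * height

p1800, p1080, p2160 = pixels(1800), pixels(1080), pixels(2160)

half_1800p = p1800 / 2       # checkerboarding renders ~half the samples/frame
print(half_1800p / p1080)    # ~1.39x a full 1080p frame
print(p2160 / p1080)         # exactly 4.0x
```

That lands at about 1.39x 1080p per frame, a shade under the 1.44x above (2300x1300 slightly overestimates half of 3200x1800), but the conclusion stands: a checkerboarded 4K-output frame costs not much more than native 1080p.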
Radeon VII has double the bandwidth for the same price but it doesn't really help performance at least in games. I think there has been more focus on effectively utilizing bandwidth because making the buses wider can get really expensive.
Hard to say . . . GDDR6 has a good deal of *theoretical* bandwidth on the table, there is the economical 'ghetto-HBM2' from Sammy, and HBM3 in the short-term.
We are likely to hear about Radeon **Navi-Instinct** pro cards this quarter, in addition to a Titan/Ampere 7nm HPC update. I'm thinking the trend will continue toward more efficient 'wider' bandwidth and advances in compression algorithms, too.
It does kind of suck that this article/review seems more full-fledged than the Navi article. On one hand that's understandable, given that Navi is an entirely new architecture, but it also comes off as a bit unfortunate, considering the 2080 Super's minor spec bump appears to have warranted what is essentially a more complete article than the competition's big launch.
FWIW: The Ryzen 2000 series article had errors, due to testing methodology, that made Intel's processors look worse. It took a while to flesh out because they spent a long time retesting everything and rebutting shill accusations. The Ryzen 3000 series article was in perfect shape on time. So I'm just not seeing this trend.
I still think it's interesting, though, that the highest Navi card we have is a x700 part. Traditionally, an x800/x80 card held the highest tier in AMD's lineups. You would think AMD named the 5700 cards intentionally, leaving room for both lesser and greater cards. A 5800 XT card would be very interesting.
*Except for the blower cooler a 5800 XT might come with. AMD's cooling solution this generation is terrible. How can you use less power than a 2070 Super, but have a higher core temperature AND higher noise?! Embarrassing engineering and product management effort. I wish board partners could have introduced their own custom cooling solutions at the 5700 launch.
The 5700's blower is not quite as bad as reviews suggest - noise measurements by most hardware sites are approximate at best, and the way most of them measure tends to exaggerate differences.
That said, I'd be really surprised if they stick with a blower for the 5800. That would be a very bad decision.
Why not round these multi-hundred dollar prices? Show $499 as $500 etc. What value are you bringing to the reader by going along with the obfuscation? You should be simplifying where possible, to help rather than hinder comparisons. We don't expect 0.25% precision in frame rates, watts, or temperatures, and it doesn't help to see it in prices.
Um... yes, that is what Anandtech is doing. Obviously. But - unlike a retail outlet - they don't have to, and they can serve us better by *not* doing it. That was the entire point of my post.
So that's actually a really good question, and it's something I've been mulling around as well.
The issue on my end essentially comes down to accuracy versus usefulness. Round numbers are far more useful. But I also don't want to post inaccurate numbers, especially in a specification table. The card is $699, not $700. Which is totally a pricing trick meant to fool buyers; but at the end of the day it's still the price.
So let me flip things around here. You guys tell me: would you be okay if I listed a rounded price, even if it's not accurate?
Yes, please do round. It takes effort for my eyes to recognize the 99 suffix and bump the leading digit in my head. Here in Northern Europe it's almost impossible to get anything at MSRP, so the 99 number doesn't help me anyway.
I second that request. There's too much talk of the 2060 being the "cheapest RTX card" and not enough about whether it's actually a good experience. It would be helpful to know what the minimum investment is for a decent experience.
By synthetic, I'm assuming you mean compute? If so, the answer is yes. AMD has not dropped a major driver update for Navi since the launch, so nothing has changed.
The Division 2 4K 99th percentile results seem to be mislabeled (or there was something wrong in the test). The RX 5700XT and GTX 1080 are showing a higher 99th percentile value than the average.
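For what it's worth, here's why that reads as an error: with review-style math, the 99th-percentile framerate (derived from the frame time that only 1% of frames exceed) normally sits below the average framerate, so barring a truly pathological frame-time distribution, a chart showing it above the average suggests mislabeling. A sketch with hypothetical frame times:

```python
# Hypothetical frame times (ms) for a run that stutters ~3% of the time.
frametimes_ms = [16.6] * 97 + [40.0, 45.0, 50.0]

# Average framerate: total frames divided by total time.
avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)

# Review-style 99th percentile: the frame time only 1% of frames exceed,
# expressed as a framerate (nearest-rank, no interpolation).
times = sorted(frametimes_ms)
p99_fps = 1000.0 / times[int(len(times) * 0.99)]

print(f"avg: {avg_fps:.1f} fps, 99th percentile: {p99_fps:.1f} fps")
```

Here the average works out to about 57 fps and the 99th percentile to 20 fps, the expected ordering.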
That's NVIDIA's factory-overclocked card. $999 is supposed to be the MSRP for reference-clocked cards (but good luck getting one for under $1149 right now).
A minor speed bump, to me, is anything in the 50 MHz range, not hundreds of MHz quicker, IMHO.
I personally feel like Nvidia is still up to their dirty old deceitful tricks and making excuses as to why. If they were able to release "Super" cards this quickly at a reduced price, so to speak, that says they very much should have done this right off the bat instead of making a song and dance about it, effectively screwing early adopters of their products (YET AGAIN) just to slap out a faster version at basically the same price.
A price reduction after release, I understand. A price reduction plus a faster, overclocked version 3-7 months down the line to "freshen up" the lineup, absolutely. But this is a bait and switch (like the "new" Nintendo Switch, same context): a shafting of early adopters, which happens all the time, but in the tech world Nvidia/Apple/Intel are notoriously terrible at this.
All that being said, I wonder how fast a 2080 of any version, especially the Ti, would be WITHOUT the ray tracing crap being shoved into it, i.e. doing standard DX features/subfeatures that do not require proprietary hardware/software.
I would imagine the transistor budget they used for the RT cores was largely a direct result of the cleaning and chopping they have done since (basically) the GTX 500 generation to get speed up and power use down.
By gutting and rearranging the transistors they were left with a bunch they could NOT use for much else (or power would go up and speed would go down), so they settled on things like G-Sync and the various Nvidia "game" features (ShadowPlay, ray tracing, etc.).
Anyway, it would be cool if they did in fact offer two versions of each of these: one with and one without ray tracing. The non-RT version would likely be quite a bit faster, with a similar reduction in power from not having to feed the extra "junk in the trunk".
In this, AMD would do very well not to worry about all that extra crud just because someone else is; focus on speed and power, everything else is "old news".
Features such as ray tracing, the way they do it, are a self-defeating sales pitch. Basically: "here is a cup of water; every second you lose X of that water and owe me Z more for the remaining amount. Better hurry up, as the next cup of water is changing its internal design a wee bit to deliver the water in fancier ways, so it will appear to be better water but is in fact less of the water you actually want, since they had to make room for (and price in) the fancy cup design, which has only ONE purpose, compared to the cup itself, which could just be made bigger with a smaller exit to ensure water is there forever."
I will tell you a story about rereleasing and effectively screwing early adopters of products, aka AMD "REBRANDEON" with their RX 200 and 300 series, then another rebrand with the RX 480 and RX 580, and the final nail in the coffin, the RX 590.
Nvidia did this too: the GTX 1060 6GB and the GTX 1060 3GB (which also had fewer CUDA cores, not just less memory), and the GT 1030 DDR4 and GT 1030 GDDR5 (which both had the same model number, GT 1030).
What are you smoking? They aren't different at all. Slight tweaks in the number of CUDA and RT cores, sure, but that's about it. Evidence: overclocking the 2080 gets you on par with the 2080 Super. It's literally the same architecture, just with a few more CUDA cores enabled. Please show me technical details as to how these chips are different, or do you also believe Intel's 9th gen is vastly different from 8th gen?
LMFAO, dude, they are the same exact chips as the launch cards. The only difference is that Nvidia has not fused off as many CUDA cores this time around, and in the 2080 Super's case you are finally getting the 2080 that should have launched on release day, with the full core enabled. The biggest change here is finally offering the 2060S with 8GB of memory, because 6GB on the older 1060 and 2060 is going to become a problem at some point.
You, sir, are an idiot. They are identical chips. The only difference is that they have fewer cut-down areas (which is not surprising if yields improved).
Xyler94/rocky12345/tamalero, don't waste your time with maxiking; his hatred of and bias against AMD blind him to anything but what he types. As you can see a few messages down, he "claims" AMD's video card business constantly loses money, when in actuality it kept them alive while their CPU side wasn't doing so well, as the two people who replied pointed out.
I had the R9 390X, and yes, it was based on the 290X, but for the price I paid and the performance it gave me up until a month ago (when I upgraded), it was a decent product.
So basically Nvidia just did a rebrand as well with the Super cards: pretty much the same cards with more of the chip enabled this time around. Some could argue these Super cards are what Nvidia should have released a year ago, instead of the cut-down cards they actually released with only a small performance bump over the 10 series, or at least not the bump everyone was expecting. If Nvidia had released these Super cards at launch, the only thing people would have complained about was the price gouging, and even then to a lesser degree, because the performance would have been a bit higher than the launch cards back then.
If you're insisting that the RX590 is a rebrand, then congratulations, so are the Super cards - because Nvidia have done the exact same thing here (wait for yields and consistency to improve and use that to edge out a little more performance from the same silicon).
You can spot the Nvidia shills because they show up yelling as though only AMD does rebrands, and as if a rebrand is somehow a form of robbery. The fact is it's something both companies have done for decades now. Pretty sure G92 was the first product to go through three names (8800 GTS 512MB / 9800 GTX / GTS 250), and Nvidia's low-end products are even more prone to this.
FWIW: AMD's 500 series was clocked higher because the process had improved, not because AMD sandbagged the 400 series to make life worse for us. The 590 was just a cheap way to place something between the 580 and Vega 56, and it was on a smaller node, so it's not *just* a rebrand. The RX 200 and 300 series were mostly rebrands, but if I'm an early adopter, how much trouble is it to check whether a card is just a rebrand and then not buy it?
These cards exist because yields got better since the architecture was launched last year. Not because of some sort of conspiracy by Nvidia to trick people into upgrading already. It's not bait-and-switch, mostly because you don't know the definition of bait-and-switch.
You don't need a card "without the ray trace crap being shoved in" because in games without RTX features, the RT cores don't do anything to harm performance in any way. The RTX 2080 performs essentially the same as a theoretical GTX 2080.
I won't address the rest of your tirade, because it's clear you're just angry that Nvidia didn't personally ask you what you wanted in a video card. While RTX hasn't exactly been a success, we should be encouraging Nvidia and AMD to find ways to improve game visuals besides higher resolution.
They might need to have stocked up a sufficient number of chips of a certain quality in order to satisfy demand. It's little good announcing a card that renders another card redundant and not having enough chips to sell. You'll just get people buying neither of those cards.
Given the performance of AMD, it doesn't look like NVIDIA had to release anything. Their top card Radeon VII is supposed to be EOL. Looks like AMD are still far behind. Anything good on the horizon from AMD?
They are behind at the highest end, but they're competitive in the mid-range which is where all the money is for them, so not a big deal when you look at it like that. They also power the current gen xb1/ps4 and next gen.
Radeon isn't operating at a loss. Apple reportedly paid for part of Vega's development, and AMD needed a GPU for their APUs (i.e. all mobile parts). Vega is great at compute, earning AMD some extra revenue; Google placed a large order. Sony and Microsoft paid for part of Navi's development and placed a huge order. Navi will be used in APUs once again, bringing in more revenue. Desktop GPUs are not AMD's only GPU market; they need the same development for their other divisions.
Essentially the GPU division tided them over when the CPU division didn't deliver; the new CPU cores were prioritized over developing a new GPU architecture for several years, so it's no wonder they are behind on the GPU side. Hopefully they will catch up for real in the next couple of years, with the increased revenue flow put to good use.
LOL, too funny. I guess when a mid-range card like the 5700 XT is able to almost match the performance of AMD's top card, which cost a lot more, AMD has a real problem there for sure. The only course of action for AMD was to remove the Radeon VII from the picture, as it served no purpose other than triggering reviewers after the 5700 cards came out.
Now, if I were Nvidia, I would be somewhat worried about what AMD has on the books and what the next move is going to be. If mid-range cards like the 5700s can topple AMD's own top card, hang in there, and beat Nvidia's upper mid-range cards, what will the Navi 20 chip be able to do? You have to know AMD is planning to do to Nvidia what they have been doing to Intel, right?
AMD themselves have said that they played Nvidia by showing higher launch prices and downplaying performance a bit to see how Nvidia would respond. I guess they had a good laugh after watching Nvidia start the Super marketing and rush the launch to beat AMD to the punch, only to find AMD had played them in every way: lowering prices just before launch, with performance better than their slides had shown by about 5%-7%.
Have you even looked at the reviews? The 5700 creams the 2060 and the 5700 XT creams the 2070. The 5700 is slightly faster than the 2060S, and the 5700 XT almost catches the 2070S; with driver tweaks it will probably get faster than a 2070S, the Navi cards being a totally new lineup. Like I said, if small Navi is this fast, you know Nvidia is worried about big Navi, which is coming very soon as well. Rest assured, though, Nvidia will have something to compete with big Navi; they will never settle for being second best or not the fastest... their CEO's ego could not live with that.
That's a lot of money to pay off, and you don't get there by releasing second best cards for several generations.
But don't worry, NVIDIA has three tricks they can play: 1) A smaller process (either 10nm or 7nm) will let them drop voltage, power consumption, and die size, all of which reduce the cost of the part and improve profits at the low end of the scale. 2) A smaller process also gives them the headroom to boost clocks and add more functional units without driving up power consumption, keeping their midrange competitive. 3) A smaller process also gives them more room to design bigger GPUs, which means they can keep releasing their Ti parts.
All three mean, of course, that even if they did nothing but die-shrink, boost clocks, and increase the number of units, they would remain competitive for another two years, on top of any architectural changes.
Why hasn't NVIDIA begun to sell 7nm cards? Does AMD have a time-limited exclusivity contract with TSMC? Or will NVIDIA wait until AMD launches cards faster than its own?
Largely because they don't have to. My guess is that NVIDIA has quite a number of 12nm FF dies in stock (probably a lot, thanks to the crypto craze) and is now selling them off before starting the next generation.
I'm sure part of it is their existing contract, as opposed to an exclusivity contract. 12nm is a much more mature process, and in that regard it makes sense to use a proven process when trying to make a large part. 7nm wasn't available when NVIDIA was designing their RTX parts, so there was no way to estimate yields or improvements over time in 2018 (when they released the RTX parts).
Now that the process is over a year old I'm sure they are working on a refreshed design to reduce power, increase clock, and add more or new functional units next year.
Nvidia's CEO also said in an interview that they can make 12nm chips much cheaper than 7nm chips, and that is a good reason not to jump to the newest production technology.
Translation: 12nm is good enough right now; we do not want to invest in 7nm because it's not worth it, since our power usage numbers are good enough as-is.
AMD, on the other hand: OMG, our power numbers are through the roof, we need a new node ASAP. And that has worked out very well for them this time around. If Navi were on GlobalFoundries 14nm or 12nm, the power usage would be insanely high for sure. TSMC's process is just better for GPUs than GloFo's. On the other hand, TSMC's process is not so good for CPUs, as seen in the Ryzen 3000 series and its lackluster clock speeds. Good thing those CPUs have a lot in them to make up for the lack of clock speed; they still perform like they are running at a higher speed than they are.
TSMC 7nm HPC node is more optimized for power usage than GF 12nm. The same can be said about Intel 10nm compared to Intel 14nm. That said, TSMC 7nm is not far behind GF 12nm in performance.
IIRC this is a fairly traditional pattern with AMD/ATI being more aggressive about moving to new processes early on while NVidia waits until they're more mature.
Is there a specific test you'd like to see? NVENC is a fixed function block, so it's not something I normally look at in individual video card reviews.
Thanks Ryan, appreciate you considering it! I am particularly interested in NVENC performance when encoding or transcoding 2160p video to HEVC using FFmpeg with high-quality (so, not NVENC default) settings, ideally at 10-bit HDR. The clip currently used for Handbrake CPU tests might serve for initial testing if it was captured in HDR. For gamers, it might be of interest to test capturing 1440p or 1080p gaming using settings appropriate for streaming. I haven't done that myself (yet), but I believe some people here might provide suggestions.
Forgot: some key questions are what difference in NVENC performance, if any, exists between cards, and (for gaming) whether and how using NVENC affects game performance. I am mostly interested in the first. Thanks!
If I were testing this I'd go for 10-bit for x265 as suggested, but I'd test both the (8-bit) x264 and x265 (HEVC) codecs. The size of the video should be selected to allow many cards to compete: I'm guessing 1440p for x265 and 1080p for x264.
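For reference, a 2160p HDR transcode with NVENC's HEVC encoder along the lines discussed might look something like this (a sketch, assuming an ffmpeg build with NVENC enabled and a recent NVIDIA driver; the filenames and bitrates are placeholders, and sensible rate-control settings depend on the source):

```shell
# Hypothetical 2160p HDR -> 10-bit HEVC transcode via NVENC.
ffmpeg -i input_2160p_hdr.mkv \
    -c:v hevc_nvenc \
    -preset slow \
    -rc vbr -b:v 20M -maxrate 30M \
    -profile:v main10 -pix_fmt p010le \
    -c:a copy \
    output_hevc10.mkv
```

Comparing wall-clock encode time and output quality across cards with a fixed command line like this would get at the per-card NVENC question raised above.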
Fixed-function logic is basically an ASIC (application-specific IC). It can only do one task, but it does that task well. It uses die area, and therefore increases manufacturing cost, but it shouldn't affect GPU performance in other tasks.
Considering the 99th-percentile fps values at 1440p (Tomb Raider: 42, Metro: 35, Division: 43), I wouldn't necessarily call the 2080 Super overpowered for 1440p.
No GTX 1080 TI? 1080 TI / 2080 / 2080 Super / 2080 TI are the only comparisons I wanted to see. I see it's not available in Bench, either. :( Please add it.
A real shame the 1080 Ti isn't in the benchmark graphs; it's clearly the competitor card to the 2080 and 2080 Super. It would show just how little NVIDIA has done to improve performance per price in just over 2 years.
It needs to be retested first. Look at Bench: it's not in the 2019 GPU bench yet. Hopefully this is just a case of not having gotten to it yet (the 2019 GPU bench is still fairly sparse vs. 2018), and not a case of the card being assigned to someone other than Ryan and thus not being available for him to cycle through the GPU benchmark box.
Good review... thanks. I'm assuming that the most important aspect of the Super cards, i.e. ray tracing performance, is very similar going from the RTX 2080 to the RTX 2080 Super? ;)
Single-digit improvement while still being as overpriced as the original 2080. Where I live the original 2080 still costs over 700 USD while the Supers are about 150 bucks more... maybe they are a lot cheaper in the States.
"However it's still requires more power than the RTX 2080 vanilla, ..." Excess "s": "However it still requires more power than the RTX 2080 vanilla, ..."
"...which its much better performance-per-dollar ratio." "With", not "which": "...with its much better performance-per-dollar ratio."
So Nvidia now thinks 8GB is enough all the way from the 2060S through the 2080S? Sure, sure. But I guess there are still enough victims who will buy this overpriced, not-future-proof crap.
I have no idea why anyone would pay the Nvidia tax with the Radeon Navi stuff out.
Right now I am using the 5700 XT and while it is a little loud, it matches up really well with the 2070 Super for $100 less. RTX specific stuff is virtually non existent.
The argument seems to be ray tracing above all else: Nvidia has it, AMD doesn't, so why buy AMD? I personally don't care about ray tracing, as the games I currently play don't have it, and I don't see any games I want to play in the future having it either. Besides, the RTX series is priced way out of my range.
Buying a GPU that is a little loud and keeping it for 1-2 years is not exactly pleasing, especially in a room with somebody else. I'd pay $100 extra just to not hear a loud GPU.
So you have decided to pay the AMD tax with your ears instead of the Nvidia tax from your pocket. Fair enough. Enjoy your fewer features and driver bugs. :P
willis936 - Tuesday, July 23, 2019 - link
I think there is an error in the first-page comparison table: the 2080 Ti memory clock. Also, first, I guess.
willis936 - Tuesday, July 23, 2019 - link
Also, the last paragraph of the conclusion should have "barring" rather than "baring".
Ryan Smith - Tuesday, July 23, 2019 - link
This is what happens when you get overeager with copying & pasting... Thanks!
extide - Tuesday, July 23, 2019 - link
And you say "ending the bundle" when I think you mean "extending".
Ryan Smith - Tuesday, July 23, 2019 - link
The fault with that one lies solely with Word!
RSAUser - Wednesday, July 24, 2019 - link
They rather need a Grammarly subscription.
Cellar Door - Tuesday, July 23, 2019 - link
The delta compression used by Nvidia is lossless.
notashill - Tuesday, July 23, 2019 - link
If memory bandwidth was "the" bottleneck, then the Radeon VII would be the fastest consumer-level GPU on the market by an enormous margin.
wr3zzz - Tuesday, July 23, 2019 - link
How do these new cards draw so much more power than a GTX 980 under load, yet have lower load temperatures and noise? Are the new fans that good?
Ryan Smith - Tuesday, July 23, 2019 - link
Blower versus open-air (axial) cooler.
Betonmischer - Tuesday, July 23, 2019 - link
Absolutely, if you compare against the reference blower that Nvidia used prior to the RTX 20 series.
Gemuk - Tuesday, July 23, 2019 - link
First page, second-to-last paragraph: "Meanwhile the RTX 2070 is definitely the spoiler to NVIDIA’s stack;" I'm sure you meant RTX 2070 Super instead.
Page 2, comparison table: $699 for 2080 Super reference instead of $499.
Nice review as always. Any updates to the fleshing out of the Navi article?
Ryan Smith - Tuesday, July 23, 2019 - link
"Any updates to the fleshing out of the Navi article?"About half-done. Once I am finished, it'll both go into the article, and be posted separately.
29a - Wednesday, July 24, 2019 - link
They tend to treat AMD with less respect. Look at the release of every Ryzen article; it took months to get the first one done.
edzieba - Tuesday, July 23, 2019 - link
The "Performance Summary" table on the Conclusion page has RTX 1080 (presumably GTX 1080).
DanNeely - Tuesday, July 23, 2019 - link
Last page typo, in the comparison table you have "RTX 2080 Super vs. RTX 1080" (GTX 1080?)
Moizy - Tuesday, July 23, 2019 - link
I still think it's interesting, though, that the highest Navi card we have is a x700 part. Traditionally, an x800/x80 card held the highest tier in AMD's lineups. You would think AMD named the 5700 cards intentionally, leaving room for both lesser and greater cards. A 5800 XT card would be very interesting.
Moizy - Tuesday, July 23, 2019 - link
*Except for the blower cooler a 5800 XT might come with. AMD's cooling solution this generation is terrible. How can you use less power than a 2070 Super, but have a higher core temperature AND higher noise?! Embarrassing engineering and product management effort. I wish board partners could have introduced their own custom cooling solutions at the 5700 launch.
Spunjji - Friday, July 26, 2019 - link
The 5700's blower is not quite as bad as reviews show - noise measurements by most hardware sites are approximate at best, and the way most of them measure tends to exaggerate differences.
That said, I'd be really surprised if they stick with a blower for the 5800. That would be a very bad decision.
Stuka87 - Tuesday, July 23, 2019 - link
We already know that big Navi is expected early next year. It will fill in that 5800 spot. Small Navi comes in 1-2 months.
Arbie - Tuesday, July 23, 2019 - link
Why not round these multi-hundred dollar prices? Show $499 as $500, etc. What value are you bringing to the reader by going along with the obfuscation? You should be simplifying where possible, to help rather than hinder comparisons. We don't expect 0.25% precision in frame rates, watts, or temperatures, and it doesn't help to see it in prices.
quorm - Tuesday, July 23, 2019 - link
Because they are reporting MSRP set by the manufacturer, and the manufacturer sets prices ending in 99.
Arbie - Tuesday, July 23, 2019 - link
Um... yes, that is what Anandtech is doing. Obviously. But - unlike a retail outlet - they don't have to, and they can serve us better by *not* doing it. That was the entire point of my post.
Arbie - Tuesday, July 23, 2019 - link
Especially in the comparison tables.
Ryan Smith - Tuesday, July 23, 2019 - link
So that's actually a really good question, and it's something I've been mulling over as well.
The issue on my end essentially comes down to accuracy versus usefulness. Round numbers are far more useful. But I also don't want to post inaccurate numbers, especially in a specification table. The card is $699, not $700. Which is totally a pricing trick meant to fool buyers; but at the end of the day it's still the price.
So let me flip things around here. You guys tell me: would you be okay if I listed a rounded price, even if it's not accurate?
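(For what it's worth, the transformation being asked for is trivial to apply mechanically; here is a quick illustrative sketch in Python of rounding the $x99-style MSRPs discussed in this thread to the nearest $10 for display. The function name is just for illustration.)

```python
def rounded_price(msrp: float) -> int:
    """Round a $x99-style MSRP to the nearest $10 for display purposes."""
    return round(msrp / 10) * 10

# The MSRPs discussed in this thread:
print(rounded_price(699))   # $699 displays as 700
print(rounded_price(499))   # $499 displays as 500
print(rounded_price(1199))  # $1199 displays as 1200
```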
SuperiorSpecimen - Tuesday, July 23, 2019 - link
How about in the specs/pricing charts show the accurate price, but when referring to price in the body of the article, go with the useful number?
DanNeely - Wednesday, July 24, 2019 - link
This
Tilmitt - Wednesday, July 24, 2019 - link
Please round!
igavus - Wednesday, July 24, 2019 - link
Yes. Please do round. It takes effort for my eyes to recognize the 99 suffix and bump the leading digit in my head. Here in northern Europe, it's almost impossible to get anything at the MSRP, so the 99 number doesn't help me anyway.
GreenReaper - Wednesday, July 24, 2019 - link
The price is the price. If you want to help clarify the relative prices, it'd be better to use a visual aid.
Spunjji - Friday, July 26, 2019 - link
Entirely in favour of rounding, here - I think SuperiorSpecimen has the right idea about how to do it.
hosps - Tuesday, July 23, 2019 - link
Any chance of seeing an RTX-enabled comparison between the various card levels?
Spunjji - Friday, July 26, 2019 - link
I second that request. There's too much talk of the 2060 being the "cheapest RTX card" and not enough about whether it's actually a good experience. It would be helpful to know what the minimum investment is for a decent experience.
DanNeely - Tuesday, July 23, 2019 - link
Are the synthetic tests without a 5700 score still due to lingering driver problems, or have you just not had time to try testing again?
Ryan Smith - Tuesday, July 23, 2019 - link
By synthetic, I'm assuming you mean compute? If so, the answer is yes. AMD has not dropped a major driver update for Navi since the launch, so nothing has changed.
Ferrari_Freak - Tuesday, July 23, 2019 - link
The Division 2 4K 99th percentile results seem to be mislabeled (or there was something wrong in the test). The RX 5700 XT and GTX 1080 are showing a higher 99th percentile value than the average.
Ryan Smith - Tuesday, July 23, 2019 - link
Whoops. Fixed that in the source data, but it didn't propagate to the graphs. It's fully fixed this time.
designgears - Tuesday, July 23, 2019 - link
https://www.nvidia.com/en-us/geforce/graphics-card...
That shows MSRP at $1199.00, not $999.00.
Ryan Smith - Tuesday, July 23, 2019 - link
That's NVIDIA's factory-overclocked card. $999 is supposed to be the MSRP for reference-clocked cards (but good luck getting one for under $1149 right now).
designgears - Tuesday, July 23, 2019 - link
Oooh right, forgot about that.
Dragonstongue - Tuesday, July 23, 2019 - link
A minor speed bump, to me, is anything in the 50 MHz range, not hundreds of MHz quicker, IMHO.
I personally feel like Nv is still up to their dirty old deceitful tricks/ways and making excuses as to why. If they were able to release a "Super" this quickly at the reduced price, "so to speak", that says they very much should have done this right off the bat instead of making a song and dance about it, effectively screwing early adopters of their products (YET AGAIN) just to slap out a faster version (slightly less cost, so basically the same price for even faster).
A price reduction after release, I understand; a price reduction and a faster overclocked version 3-7 months down the line to "freshen up", absolutely. But this is a bait and switch (like the "new" Nintendo Switch, same context: shafting of early adopters, which happens all the time, but in the tech world Nv/Apple/Intel were/are notoriously terrible at this).
All that being said, I wonder how fast a 2080 (any version, especially the Ti) would be WITHOUT the ray trace crap being shoved into it, i.e. doing "standard DX features/subfeatures that do not require proprietary hard/software".
I would imagine the transistor budget they used for the RT cores was likely more a direct result of the cleaning and chopping they have done since the GTX 500 generation (basically) to get the speed up and power use down.
By gutting and rearranging the transistors etc. they were left with a bunch they could NOT use for much of anything else (or power would go up and speed would go down, type thing), so they settled on things like G-Sync and all the various Nvidia "game" features (ShadowPlay, ray trace, etc.)
Anyway, it would be cool if they did in fact offer 2 versions of each of these, 1 with and 1 without ray tracing; the non-RT version would likely be quite a bit faster, with a similar reduction in power from not having to power the extra "junk in the trunk".
In this, AMD would do very well not to worry about all that extra crud just because someone else is doing it; focus on speed and power, everything else is "old news".
Features such as ray tracing the way they do it are a self-defeating sales pitch. Basically: "here is a cup of water; in 1 second you lose X of that water and owe me Z more for the remaining amount. Better hurry up, as the next cup of water is changing its internal design a wee bit to deliver the water in fancier ways, so it will appear to be better water but is in fact less of the water you actually want, as they had to make room for and price in the fancy cup design, which has only ONE purpose, compared to the cup itself, which could just be made bigger with a smaller exit to ensure water is there forever."
Maxiking - Tuesday, July 23, 2019 - link
I will tell you a story about rereleasing and effectively screwing early adopters of products: AMD REBRANDEON with their RX 200 and 300 series, then another rebrand with the AMD REBRANDEON RX 480 and RX 580, and the final nail in the coffin, the AMD REBRANDEON RX 590.
Xyler94 - Tuesday, July 23, 2019 - link
Nvidia did this too: GTX 1060 6GB, 3GB, GTX 1060 3GB but with fewer CUDA cores, GT 1030 DDR4, GT 1030 GDDR5 (which both had the same model number, GT 1030). Even the Supers are rebranded...
Maxiking - Tuesday, July 23, 2019 - link
Supers are not rebranded, different chips. But nice try. 5/7
Xyler94 - Tuesday, July 23, 2019 - link
What are you smoking? They aren't different at all. Slight tweaks in the number of CUDA and RT cores, sure, but that's about it. Evidence: overclocking the 2080 gets you on par with the 2080 Super. It's literally the same architecture, just with a few more CUDA cores enabled. Please show me technical details as to how these chips are different, or do you also believe Intel's 9th gen is vastly different than 8th gen?
rocky12345 - Tuesday, July 23, 2019 - link
LMFAO dude, they are the exact same chips as the launch cards. The only difference is Nvidia has not fused off as many CUDA cores this time around, and in the 2080 Super's case you are finally getting the 2080 that should have been launched on release day, with the full core enabled. The biggest change here is finally offering the 2060S with 8GB memory, because 6GB on the older 1060 and 2060 is going to become a problem at some point.
tamalero - Wednesday, July 24, 2019 - link
You, sir, are an idiot. They are the same identical chips. The only difference is that they have fewer cut-down areas (which is not surprising if they improved yields).
Korguz - Thursday, July 25, 2019 - link
Xyler94/rocky12345/tamalero, don't waste your time with maxiking; his hatred and bias against AMD will blind him to anything but what he sees/types. As you can see a few messages down, he "claims" AMD's video card business constantly loses money, but in actuality it kept them alive long enough while their CPU side wasn't doing so well, as the 2 people who replied posted.
rocky12345 - Tuesday, July 23, 2019 - link
I had the R9 390X, and yes it was based on the 290X, but for the price I paid and the performance it gave me up until a month ago when I upgraded, it was a decent product.
So basically Nvidia just did a rebrand as well with the Super cards. Pretty much the same cards with more of the chip enabled this time around. Some could argue these Super cards are what Nvidia should have released 1 year ago, and not the cut-down cards they actually released with a small performance bump over the 10 series, or at least not the performance bump everyone was expecting. If Nvidia had released these Super cards at launch, the only thing people would have complained about was the price gouge, and even then to a lesser degree, because the performance would have been a bit higher than the launch cards back then.
Korguz - Tuesday, July 23, 2019 - link
Maxiking, wow, you hate AMD. But to be fair, Nvidia is also guilty of rebrands as well, probably not as bad as AMD, but still bad:
600 series
510 -> 605 (Fermi GF119)
GT520 -> GT610, GT620 (OEM), 705 (Fermi GF119)
GT530 -> GT620 (retail) (Fermi GF119)
GT440 (DDR3) -> GT630 (DDR3), GT730 (DDR3, 128-bit) (Fermi GF108)
GT440 (GDDR5) -> GT630 (GDDR5) (Fermi GF108)
GT545 (DDR3) -> GT640 (OEM) (Fermi GF116)
GTX560 SE (OEM) -> GT645 (Fermi GF114-400-A1)
700 series
GT630 (Kepler) -> GT740 (Kepler GK107)
GT630 (Kepler rev 2) -> GT710, GT720, GT730 (128-bit & GDDR5) (Kepler GK208)
210 -> 405 (OEM) (Tesla GT218)
GTX680 -> GTX770 (Kepler GK104)
but i am sure you will find some way to refute and ignore this fact.. but whatever, man.
Spunjji - Friday, July 26, 2019 - link
If you're insisting that the RX 590 is a rebrand, then congratulations, so are the Super cards - because Nvidia have done the exact same thing here (wait for yields and consistency to improve and use that to edge out a little more performance from the same silicon).
You can spot the Nvidia shills because they show up yelling as though only AMD do rebrands, and as if a rebrand is somehow a form of robbery. Fact is it's been something both companies have done for decades now. Pretty sure G92 was the first product to go through three names (8800 GTS 512MB / 9800 GTX / GTS 240) and Nvidia's low-end products are even more prone to this.
ballsystemlord - Saturday, July 27, 2019 - link
FWIW: AMD's 500 series was clocked higher because the process had been improved, not because AMD wanted to make life worse for us by sandbagging on the 400 series. The 590 was just a cheap way to place something in between the 580 and Vega 56, and it was on a smaller node, so it's not *just* a rebrand. The RX 200 and 300 series were mostly rebrands, but if I'm an early adopter, how much trouble is it to check and find out that a card is just a rebrand, and then not buy?
jordanclock - Tuesday, July 23, 2019 - link
That is a lot of words to be wrong. These cards exist because yields got better since the architecture was launched last year, not because of some sort of conspiracy by Nvidia to trick people into upgrading already. It's not bait-and-switch, mostly because you don't know the definition of bait-and-switch.
You don't need a card "without the ray trace crap being shoved in" because in games without RTX features, the RT cores don't do anything to harm performance in any way. The RTX 2080 performs essentially the same as a theoretical GTX 2080.
I won't address the rest of your tirade because it's clear you're just angry that Nvidia didn't come ask you personally what you wanted in a video card. While RTX hasn't been exactly a success, we should be encouraging Nvidia and AMD to find ways to improve game visuals besides higher resolution.
GreenReaper - Wednesday, July 24, 2019 - link
They might need to have stocked up a sufficient number of chips of a certain quality in order to satisfy demand. It's little good announcing a card that renders another card redundant and not having enough chips to sell. You'll just get people buying neither of those cards.
YB1064 - Tuesday, July 23, 2019 - link
Given the performance of AMD, it doesn't look like NVIDIA had to release anything. Their top card, Radeon VII, is supposed to be EOL. Looks like AMD are still far behind. Anything good on the horizon from AMD?
Maxiking - Tuesday, July 23, 2019 - link
Yeah, that fabricated 4.7GHz boost on the Ryzen 3950X.
designgears - Tuesday, July 23, 2019 - link
They are behind at the highest end, but they're competitive in the mid-range, which is where all the money is for them, so not a big deal when you look at it like that. They also power the current gen XB1/PS4 and next gen.
Maxiking - Tuesday, July 23, 2019 - link
Obviously the money isn't there; the GPU division is constantly operating at a loss.
Rudde - Thursday, July 25, 2019 - link
Radeon isn't operating at a loss. Apple reportedly paid a part of Vega development, and they needed a GPU for their APUs (i.e. all mobile parts). Vega is great at compute, earning AMD some extra revenue. Google placed a large order.
Sony and Microsoft paid a part of Navi development and placed a huge order. Navi will be used in APUs once again, bringing in some revenue.
Desktop gpus is not AMDs only gpu market, they need the same development for their other divisions.
Zoolook - Thursday, July 25, 2019 - link
Essentially the GPU division tided them over when the CPU division didn't deliver; the new CPU cores were prioritized over developing the new GPU architecture for several years, so it's no wonder they are behind on the GPU side. Hopefully they will catch up for real in the next couple of years with the increased revenue flow put to good use.
rocky12345 - Tuesday, July 23, 2019 - link
LOL, too funny. I guess when a mid-range card like the 5700 XT was able to almost match the performance of AMD's top card, which cost a lot more than the 5700 XT, AMD had a real problem there for sure. The only course of action for AMD was to remove the Radeon VII from the picture, as it served no more purpose other than triggering reviewers after the 5700 cards came out.
Now if I was Nvidia I would be somewhat worried as to what AMD has on the books and what the next move is going to be. If mid-range cards like the 5700s can topple AMD's top card, and hang in there and beat Nvidia's upper-mid cards, what will the Navi 20 chip be able to do? You have to know AMD is planning on doing to Nvidia what they have been doing to Intel, right?
It has been proven and told by AMD themselves that they played Nvidia by showing higher launch prices as well as downplaying the performance a bit to see how Nvidia would respond. I guess they had a good laugh after seeing Nvidia start the Super marketing and a really rushed launch to beat AMD to the punch, just to find out AMD played them in every way and lowered the prices just before launch, and the performance was better than what they had shown in their slides by about 5%-7%.
Have you even looked at the reviews? The 5700 creams the 2060 and the 5700 XT creams the 2070. The 5700 is slightly faster than the 2060S, and the 5700 XT almost catches the 2070S; with driver tweaks it probably will get faster than a 2070S, because the Navis are a totally new card lineup. Like I said, if small Navi is this fast, you know Nvidia is worried about big Navi, which is coming very soon as well. Rest assured, though, Nvidia will have something to compete with big Navi; they will never settle for being second best or not the fastest... their CEO's ego could not live with that.
michael2k - Wednesday, July 24, 2019 - link
You are way too emotionally invested if you think the CEO's ego has anything to do with product design.
Paying for their second new HQ? That's NVIDIA's reason for never settling for second best:
https://www.digitaltrends.com/computing/nvidia-cam...
That's a lot of money to pay off, and you don't get there by releasing second best cards for several generations.
But don't worry, NVIDIA has three tricks they can play:
1) Smaller process (either 10nm or 7nm) will give them the ability to drop voltage, power consumption, and die size, all of which reduce the cost of the part and improve profits at the low end of the scale.
2) Smaller process also gives them the headroom to boost clocks and add more functional units without driving up power consumption, making their mid-range competitive
3) Smaller process also gives them more room to design bigger GPUs, which means they can keep releasing their Ti parts
All three mean, of course, that even if they did nothing but die shrink, boost clocks, and increase the number of units, they would remain competitive for another two years, on top of architectural changes.
Silma - Tuesday, July 23, 2019 - link
Why hasn't NVIDIA begun to sell 7nm cards? Does AMD have a time-limited exclusivity contract with TSMC?
Or will it wait until AMD launches cards faster than its own?
eastcoast_pete - Tuesday, July 23, 2019 - link
Largely because they don't have to. My guess is that NVIDIA has quite a number of 12 nm FF dies in stock (probably a lot, thanks to the crypto craze) and are now selling them before they start the next generation.
michael2k - Tuesday, July 23, 2019 - link
I'm sure part of it is their existing contract, as opposed to an exclusivity contract. 12nm is a much more mature process, and in that regard it makes sense when trying to make a large part to use a proven process. 7nm wasn't available when NVIDIA was designing their RTX parts, so there was no way to estimate yield or improvements over time in 2018 (when they released the RTX parts).
Now that the process is over a year old, I'm sure they are working on a refreshed design to reduce power, increase clocks, and add more or new functional units next year.
haukionkannel - Tuesday, July 23, 2019 - link
Nvidia's head also said in one interview that they can make 12nm chips much cheaper than 7nm chips... and that is a good reason not to go for the newest production technology.
rocky12345 - Tuesday, July 23, 2019 - link
Translation: 12nm is good enough right now; we do not want to invest in 7nm, as our power usage numbers are good enough right now.
AMD on the other hand: OMG, our power numbers are through the roof, we need a new node ASAP. And that has worked out for them very well this time around. If Navi was on GlobalFoundries 14nm or 12nm, the power usage would be insanely high for sure. TSMC's process is just better for GPUs than Global's. On the other hand, TSMC's process is not so good for CPUs, as seen in the Ryzen 3000 series and the lackluster clock speeds. Good thing those CPUs have a lot in them to make up for the lack of clock speed, and they still perform like they are running at a higher speed than they are.
Rudde - Thursday, July 25, 2019 - link
TSMC's 7nm HPC node is more optimized for power usage than GF 12nm. The same can be said about Intel 10nm compared to Intel 14nm. That said, TSMC 7nm is not far behind GF 12nm in performance.
DanNeely - Wednesday, July 24, 2019 - link
IIRC this is a fairly traditional pattern, with AMD/ATI being more aggressive about moving to new processes early on while NVidia waits until they're more mature.
eastcoast_pete - Tuesday, July 23, 2019 - link
Thanks Ryan! I know this is a bit niche, but could you add a short test and a paragraph or so on NVENC performance when you review NVIDIA cards?
Ryan Smith - Tuesday, July 23, 2019 - link
Is there a specific test you'd like to see? NVENC is a fixed-function block, so it's not something I normally look at in individual video card reviews.
eastcoast_pete - Wednesday, July 24, 2019 - link
Thanks Ryan, appreciate you considering it! I am particularly interested in NVENC performance when encoding or transcoding 2160p video to HEVC using FFmpeg with high-quality (so, not NVENC default) settings, ideally at 10-bit HDR. The clip currently used for Handbrake tests of CPUs might serve for initial testing if it's captured in HDR. For gamers, it might be of interest to test this function capturing 1440p or 1080p gaming using settings appropriate for streaming. I haven't done that myself (yet), but believe some people here might provide suggestions.
eastcoast_pete - Wednesday, July 24, 2019 - link
Forgot: some key questions are what, if any, difference in NVENC performance exists between cards, and (for gaming) if and how using NVENC affects gaming. I am mostly interested in the first. Thanks!
ballsystemlord - Saturday, July 27, 2019 - link
I've been curious about card encode performance on GPUs using ffmpeg for some time. ffmpeg can also use CUDA and OpenCL.
ballsystemlord - Saturday, July 27, 2019 - link
If I were testing this, I'd go for 10-bit for x265 as suggested, but I'd test both the (8-bit) x264 and x265 (HEVC) codecs. The size of the video should be selected to allow many cards to compete. I'm guessing this would be 1440p for x265 and 1080p for x264.
GreenReaper - Wednesday, July 24, 2019 - link
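For anyone who wants to experiment at home, a high-quality 10-bit HEVC transcode via NVENC along the lines discussed above might look something like this. The flags are a sketch rather than a recommendation: available rate-control modes and preset names vary between FFmpeg builds and GPU generations, so check `ffmpeg -h encoder=hevc_nvenc` on your own system, and the filenames are placeholders.

```sh
# Illustrative only: 2160p 10-bit HEVC transcode using the hevc_nvenc encoder.
# -rc vbr with -cq 19 targets high quality; p010le is the 10-bit pixel format.
ffmpeg -i input_2160p.mkv \
    -c:v hevc_nvenc -preset slow -rc vbr -cq 19 \
    -profile:v main10 -pix_fmt p010le \
    -c:a copy output_hevc.mkv
```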
Fixed-function, but is it fixed-speed, or does it vary based on one or other of the GPU/memory speeds?
Rudde - Thursday, July 25, 2019 - link
Fixed-function logic is basically an ASIC (application-specific IC). It can only do one task, but it does that task well. It uses die area, and therefore increases manufacturing costs, but it shouldn't affect GPU performance in other tasks.
Dark42 - Tuesday, July 23, 2019 - link
Considering the 99th percentile fps values at 1440p:
Tomb Raider: 42, Metro: 35, Division: 43
I wouldn't necessarily call the 2080 Super overpowered for 1440p.
irsmurf - Tuesday, July 23, 2019 - link
No GTX 1080 Ti? 1080 Ti / 2080 / 2080 Super / 2080 Ti are the only comparisons I wanted to see. I see it's not available in Bench, either. :( Please add it.
yetanotherhuman - Wednesday, July 24, 2019 - link
A real shame the 1080 Ti isn't in the benchmarking graphs; it's clearly the competitor card to the 2080 and 2080 Super. It would show just how little NVIDIA has done in giving us better performance per price in just over 2 years.
DanNeely - Wednesday, July 24, 2019 - link
It needs to be retested first. Look at Bench: it's not in the 2019 GPU bench yet. Hopefully this is just a case of not having gotten to it yet (2019 GPU bench is still fairly sparse vs 2018), and not a case of the card being assigned to someone other than Ryan and thus not being available for him to cycle through the GPU benchmark box.
It's really weird that you never include the 1080 Ti in the benchmarks.
bill.rookard - Wednesday, July 24, 2019 - link
The 5700 XT certainly is a spoiler of sorts: 80-90% of the performance for a little more than half the price.
Ranger90125 - Thursday, July 25, 2019 - link
Good review... thanks. I'm assuming that the most important aspect of the Super cards, i.e. the ray tracing performance, is very similar going from RTX 2080 to RTX 2080 Super? ;)
Kishoreshack - Thursday, July 25, 2019 - link
How do I donate to Anandtech for writing such excellent articles?
bajs11 - Friday, July 26, 2019 - link
Single-digit improvement while still being as overpriced as the original 2080.
Where I live the original 2080 still costs over 700 USD, while the Supers are about 150 bucks more...
maybe they are a lot cheaper in the states
ballsystemlord - Saturday, July 27, 2019 - link
Spelling and grammar corrections so far:"However it's still requires more power than the RTX 2080 vanilla, ..."
Excess "s":
"However it still requires more power than the RTX 2080 vanilla, ..."
"...which its much better performance-per-dollar ratio."
"With", not "which":
"...with its much better performance-per-dollar ratio."
Beaver M. - Saturday, July 27, 2019 - link
So Nvidia now thinks 8 GB is enough from the 2060S through the 2080S? Sure, sure.
But I guess there are still enough victims who will buy this overpriced and not future-proof crap.
DillholeMcRib - Sunday, July 28, 2019 - link
I have no idea why anyone would pay the Nvidia tax with the Radeon Navi stuff out.
Right now I am using the 5700 XT, and while it is a little loud, it matches up really well with the 2070 Super for $100 less. RTX-specific stuff is virtually nonexistent.
Qasar - Sunday, July 28, 2019 - link
The argument seems to be ray tracing above all else: Nvidia has it, AMD doesn't, so why buy AMD?
I personally don't care about ray tracing, as the games I currently play don't have it, and I don't see any games I want to play in the future having it either. And the RTX series is priced way out of my price range.
milkod2001 - Friday, August 2, 2019 - link
Buying a GPU which is a little loud and keeping it for 1-2 years is not exactly pleasant, especially in a room with somebody else. I'd pay $100 extra just to not hear a loud GPU.
D. Lister - Saturday, August 3, 2019 - link
@DillholeMcRib
So you have decided to pay the AMD tax with your ears, instead of the Nvidia tax from your pocket. Fair enough. Enjoy your fewer features and driver bugs. :P