Yes. I could not care less as long as they offer killer performance. At long last, they offer appreciably more performance per dollar than Pascal. That is a win in my book.
Yeah, the 10 series is still the king of value. I think people will find that some of these Super cards are not as good as they seem. The 2070 Super, for example, runs fewer cores at a higher clock to get close to the same performance as a 1080. But the clocks used are basically what every 1080 can do in its sleep when overclocked. Overclocking results on the 2070 Super put it only up to a stock 1080 due to the lower headroom available, and many 1080s are clocked 90MHz higher or more by default. The 1080 Super will fare even worse, having basically the same number of cores as the 1080.
While true, I think you guys are overlooking the value that the 2080 and 2080 Ti cards bring: NVLink. It's not the same as SLI. Going forward, this should give these cards much greater longevity on the market.
"value" he says. I paid $799 for my 1080ti and quite a few people said THAT was a high price. NVIDIA sure fooled them. I could care less about NVLink. I have very little reason to have more than 1 GPU in my system. The 1080ti more than keeps up at 4K at max details (or close to max details, sometimes I might have to turn AA down depending on the implementation used), and by the time it doesn't, there will be faster cards that do.
the 1080ti will go down in history as one of the longest lived cards ever. since the 2080 ti is not a significant step up, especially when you consider price, the 1080ti basically spans two generations of dominance
I just cannot back NVLink / SLI. it is proven that developer and nvidia support fall short far too often. sure, you can run 4k 120fps most of the time, but it's not supported well enough across all titles, and issues like frame timing and input lag are introduced. SLI was good in older APIs when it had the option for driver-based SLI AA, but newer APIs require a custom AA implementation and nobody has taken the time to make this tech work properly.
I'm mainly referring to how SLI worked with older versions of DirectX. In older versions the driver could force AA modes; with newer versions, AA modes need to be programmed in by the developer. For SLI AA, each card would render half of the samples for a given frame, so the load on each card was almost the same. With alternate frame rendering, each card renders every other frame and the load can vary from frame to frame. In the end it increases input lag, or makes frame pacing less accurate, or a little of both.
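A toy sketch of that pacing difference, with made-up frame costs rather than measurements: under AFR the two GPUs' completion times bunch up into the classic short/long micro-stutter pattern, while split-sample rendering keeps presentation intervals uniform (at a slightly lower average throughput).

```python
# Toy model of the pacing argument above -- all numbers are made up for
# illustration. Assume each frame costs 16 ms on one GPU and the CPU can
# queue the next frame 2 ms after submitting the previous one.
FRAME_MS, CPU_GAP_MS, N = 16.0, 2.0, 8

# AFR: the two GPUs render alternate frames in parallel.
gpu_free = [0.0, 0.0]          # time at which each GPU becomes idle
cpu_ready = 0.0                # time at which the next frame is queued
present_afr = []
for i in range(N):
    start = max(cpu_ready, gpu_free[i % 2])
    gpu_free[i % 2] = start + FRAME_MS
    present_afr.append(start + FRAME_MS)
    cpu_ready = start + CPU_GAP_MS

# Split-sample "SLI AA": both GPUs share every frame, so each frame takes
# roughly half the time plus a small sync cost, strictly one after another.
present_split = [(i + 1) * (FRAME_MS / 2 + 1.0) for i in range(N)]

pace = lambda t: [round(b - a, 1) for a, b in zip(t, t[1:])]
print("AFR intervals  :", pace(present_afr))    # [2.0, 14.0, 2.0, 14.0, ...]
print("Split intervals:", pace(present_split))  # [9.0, 9.0, 9.0, ...]
```

AFR averages 8 ms per frame here versus 9 ms for split rendering, so it wins on average fps while losing on frame pacing, which is exactly the trade-off described above.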
nvlink is sli. they changed the name and upgraded the bridge but the tech is functionally the same and has the same issues. (actually the issues are getting worse due to sli becoming less common and developers spending less time on it)
yep.. but 3dfx SLI and nvidia SLI only share the abbreviation, nothing else. 3dfx SLI is Scan-Line Interleave; nvidia SLI is Scalable Link Interface. with 3dfx's version there was no need to create profiles or anything.. add the 2nd card and see a performance boost right away. too bad that way of combining cards doesn't work nowadays...
The catch is, on the used market you can pick up a 1080 (non-Ti) for <$300. I recently picked one up for $250 off Facebook Marketplace, and I've seen 1070 Tis for $200.
It's mostly people dumping and buying into the ray tracing train. And even with these cards, as fast as they are, you still need to spend a LOT (hundreds more) for a card not significantly more powerful than a 1080.
i meant the 2080, since that is the performance target of the 2070 super. stock to stock is close if you compare to a 2080 with a 1710MHz boost, but many are clocked 1800 or higher, and the overclocking headroom will still be much better on the old 2080 vs the 2070 super. it's the same chip and you run into the frequency wall in the same range; the 2070 super just has fewer cores.
King of value my ass! The GTX 1080 Ti has always been too expensive and these RTX cards are just obscene. But that's what happens when they trick you with anchoring. If you don't understand, go watch "Let’s go whaling: Tricks for monetising mobile game players with free-to-play" video on YouTube and skip to 12:27
I bought the 2070 Super for the ray tracing hardware, which the 1080 Ti does not have. Doing ray tracing in software, the 1080 Ti can't compete with the 2060, let alone the 2070 or the Super cards.
Yup. RTX cards are still priced as if they launched 3 years ago. Pathetic Nvidia and even more pathetic consumers who give them a dime. Nvidia completely destroyed the GPU market where only fools buy into it.
As someone that plays computer games just fine on crappy, slow iGPUs like Bay Trail graphics or a Radeon HD 6310, I can safely say that you can waste just as many hours of your life rotting away behind slow, cheap hardware if you're the slightest bit selective about the titles you pick. So yes, there is literally zero reason to give a flying you-know-what about what graphics card does what, or even to care overly much about the sort of computer you currently own, as long as the stupid thing boots up and all its buttons work.
the games i play.. wouldn't work as well on vid cards like the ones you use, Peach :-) too graphically intensive, and when you turn the eye candy down.. it kind of looks like games from the mid 90s in DOS....
Did they steal Gillette's marketing people? This is soon to be the 2070 Fusion Power Stealth Back Edition. It'll vibrate in your case to make your PCI-E pins rise up to be annihilated by the supreme power of this card.
You forgot the **Duper** ... Super **Duper** Turbo Hyper Championship Edition ;-)
Good luck competing in the 545mm² TU104 "Die Per Wafer Category" (best guess: 95 (!) per 300mm wafer). Navi 10? 226.
I'm smelling "Offers You Cannot Refuse" combo deals on the Radeon RX 5700 series and the AMD Ryzen 3000s __ right off the bat. It's Jerry Sanders golden anniversary, after all ...
YES. I don't necessarily mind pointless numbering schemes (2070), but I really just wish they'd use all the digits - call them the '2075' or something. If you have the same number of CUDA cores but a different clock, THEN maybe add a suffix (like Ti).
First, I don't know why anyone even cares what the name is; frankly, it's not as if "RTX 2080" or "5700 XT" is a fantastically inspiring name either. Seriously, if that's all you can find to complain about, then you must really REALLY like these cards?
Second, even when I force myself to consider your point seriously, I still don't get it. "Super" is directly from Latin, and literally means "above" or "beyond" or "in addition." How is this not an appropriate description of the way these cards alter their previous namesakes, exactly? What would YOU call it, oh wise one Pino?
K, err... people are used to number monikers, I'm guessing, like the 780, 980, 1080, 2080.
The thing to keep in mind is that partner cards are already using long and stupid names, adding OG, several X's, Extreme, II, ... They already get really long names; throwing a Super in there is not going to make it easy for people who are not tech savvy.
Also, regarding the Latin "super": in today's world wouldn't it have been better to use Legendary, Epic, Ultra Rare, and so on?
Problem with the numericals is that Nvidia shat on that with the 2000 series: the xx70 segment was shifted up and replaced the xx80 slot in price.
So the 2070 replaced the 1080, the 2080 replaced the Ti, and the 2080 Ti replaced the Titan slot of the older generations. If you take the model names out and compare just the price points, the performance boost is minimal in that generation.
Jeez, some serious fanboys here. Get a life, dude! I never said it's a bad product; I have an RTX 2080 btw and love it. And the bad marketing remark goes for both Nvidia and AMD, they both suck. They should just make it easier for the average Joe who wants to enjoy some gaming. There is no easy way for an average consumer to get a VGA without reading a bunch of articles trying to figure out the difference between GTX 1660, GTX 1660 Ti, RTX 2060, and RTX 2060 Super. It could be as easy as RTX 2083, RTX 2085, RTX 2087 instead of RTX 2080, RTX 2080 Super and RTX 2080 Ti.
I'd take the cooler off, slowly, bit by bit, screw by screw, fan blade by fan blade and call it the XXX edition. It'll be fucked after that so seems appropriate.
I think Nvidia just reinforced my belief in 'never buy 1st gen products'. RTX clearly was an experiment on Nvidia's part because a lack of competition can make you carefree like that.
Personally, while I don't necessarily like the "Super" moniker, I do appreciate that nVidia is at least keeping the RTX 2XXX name. They could have called it a RTX 3XXX instead, in which case, it would be somewhat misleading since the underpinning technology and even the GPU die is from the 2XXX series.
Truthfully, it's what I had on hand. I was preparing for the RX 5700 review when NV interjected with the RTX 20 series Super launch. Plus the Radeon VII is a $699 video card, so it's not really a competitor to the $399/$499 Super cards.
Besides, the real test comes when the embargo is lifted on the RX 5700 series, since that's what people will be cross-shopping (well, for those of us that actually do that, rather than getting corporate logos tattooed on our bodies).
I wonder if AMD will do anything regarding VII pricing, as $699 is clearly not viable against the 2080 Super later this month. $599 would be more suitable, but maybe they'll go for $649 because of the 16GB of memory.
The Radeon VII has 16GB of HBM2 and is built on an expensive process. I doubt they can cut the price much. They never intended it to be a high volume part. It will probably sell as a poor man's version of NVIDIA's Titan. The fact that they are selling the Radeon VII at all means that their Radeon Instinct MI50 isn't in much demand.
I reckon they’ll keep the price on the VII where it is.
While it's a card that can't compete with Nvidia's cards on either performance or price where gaming is concerned, it still has the edge in compute.
I reckon most of the folks who get one need it for compute performance and don't want to spend thousands of dollars on a Quadro or Pro AMD card.
"The fact that they are selling the Radeon VII at all means that their Radeon Instinct MI50 isn't in much demand." Not likely the case. AMD had to have something, a market presence in the segment until Navi. Especially since Navi will be released slowly and Nvidia already had a new part out.
You might be right, but AMD has limited ability to lower the price on any of its Vega-based GPUs, partly because the HBM2 memory on them is ridiculously expensive, and partly because the sheer wattage of these cards and chips means that they need pretty beefy board and cooling designs.
That's why we never really saw great deals on the Vega 56/64, even after the RTX cards came out with better performance/$ (or performance/W) for most consumer applications.
Much more important are the prices for the 5700 (XT). If AMD's Computex performance figures are correct, we now have this situation:
The 5700 XT at $449 is ~5-10% faster than the 2060 Super at $399. The 5700 at $379 is ~10-15% faster than the 2060 at $349. There is also the game bundle situation, in favor of Nvidia.
With these prices, the 5700 makes no sense: for just $20 more you get a much better 2060 Super. Similar for the 5700 XT: $50 more for just 5-10% is too much. AMD must lower their prices; the question is by how much? If AMD brings the 5700 XT down to $399 and the 5700 to $349, then Nvidia is in a world of hurt. Nvidia can't lower their prices too much because their chips are big and expensive, and they can't react with new chips anytime soon, while AMD has room for a price war with the small 7nm chips and more Navi variants on the horizon.
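The perf-per-dollar arithmetic behind that argument, using midpoints of the quoted Computex-based estimates (illustrative assumptions, not benchmark results):

```python
# (amd card, amd price, assumed perf vs. its nvidia counterpart,
#  nvidia counterpart, nvidia price) -- midpoints of the ranges above
pairs = [
    ("RX 5700 XT", 449, 1.075, "RTX 2060 Super", 399),
    ("RX 5700",    379, 1.125, "RTX 2060",       349),
]
for amd, p_amd, rel, nv, p_nv in pairs:
    price_ratio = p_amd / p_nv
    value = rel / price_ratio   # perf/$ relative to the nvidia card
    print(f"{amd}: {rel - 1:+.1%} perf for {price_ratio - 1:+.1%} price "
          f"=> {value:.2f}x the perf/$ of the {nv}")
```

On these assumed numbers the XT lands at roughly 0.96x the perf/$ of the 2060 Super and the vanilla 5700 only marginally ahead of the 2060, which is why the suggested $399/$349 repricing matters.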
> Nvidia can't lower their prices too much because their chips are big and expensive
NVIDIA can lower their prices all they want because they've got cash in the bank. But they won't, firstly because they just did, and secondly because they already have the market sewn up. Even if Navi does undercut Turing pricing, the former still has to overcome the market dominance of the latter (and Pascal).
"Nvidia can't lower their prices too much because their chips are big and expensive" . You seem to know quite a lot about how much money Nvidia spends on making their products. WikiLeaks or pure divine inspiration?
From their review, it seems to sit between the 2060 FE and the 2060 Super FE. If you HAD to get a GPU right now, rather than waiting for the new AMD releases, it would be a no-brainer to go for the 2060 rather than the Vega 64, as the only real advantage the 64 has is more memory, while the 2060 is cooler, quieter, and uses less power.
That's only if you insist on having everything on ultra (which isn't the best option anyway, considering the gap between high and ultra is so small you wouldn't notice the difference).
That isn't remotely true. I have been gaming at native 4K since 2017 with a 1080 Ti, a 5GHz 8700K and 16GB of DDR4-3600, and today I can still crank out 60fps at 4K in the latest and greatest titles. The reason no one thinks it's possible is that they are stubborn hard-heads who think every single setting has to be cranked to full ultra, or that they must use absurd levels of AA, which is completely stupid at 4K to start with. A lot of modern games have a few settings that wreck performance for very little, if anything, in return visually over their lower settings, such as high or very high instead of ultra, and when you take the time to find these settings, 60fps at 4K with a decent rig isn't hard to achieve at all.
I would love to know the settings on these games to get a constant 60fps at 4K. The ones benchmarked here I have issues with at 4K, even when lowering settings. I'm not saying you can't get there; the average fps seems to indicate you can, but there are just too many dips below 60 that ruin the experience. 1440p seems like the sweet spot.
Thanks for breaking the pervasive stupidity on 4k60 being impossible. I, too, have been playing latest gen titles, just removing or disabling the awful settings (motion blur, godrays, etc), and using medium settings on the two big performance hitters (shadows and AA), and the game still looks 90% as good with 50~60% more framerate.
60 FPS just isn't needed by everyone; I'm happy with mid-40s if it doesn't dip hard, so my 970 can handle 1440p and even 4K in some games. Some folks are more sensitive than others.
To be honest, I don’t really care that much about 4K/60 hz.
The difference over 2K/1440p is negligible (especially if you're not immediately next to the monitor).
Having a raytraced 1440p picture makes a bigger difference, and hopefully the industry is moving towards that instead of ever higher frame rates and ever higher resolutions.
Fair, but the benchmarks REALLY suggest that you should spend the extra $50 if you can afford it, since you get more, faster (well, wider) RAM on top of the CUDA core increase. Still, the bigger point is that $350-400 used to get you a 'mid range' GPU in a given series, and now it's the "entry level."
I'd be foolish to jump on any of these cards right now until after the RX 5700 reviews hit. But saying that, the RTX 2060 Super does look to be one very attractive card now. The rest of the line up feels like this is where nVidia should have originally positioned the RTX line up nearly a year ago as it would have given Pascal owners more of a reason to upgrade.
This does make me wonder how much longer until nVidia will have their 7 nm chips ready. If they were due at the end of the year, why not just do a small price cut if the Radeon RX line up is competitive and wait it out? If nVidia's 7 nm chips are further out, this refresh makes far more sense but has me scratching my head as to what nVidia's hold up could be. If those 7 nm chips arrive in 2020, then AMD will have had 7 nm products out on the market (though for data centers) for a full year ahead of nVidia which again seems to be weird.
To be fair, you can't get the Nvidia Super cards until 7/9, after AMD's cards are out.
When Nvidia launched the RTX series, I thought that they had to be pretty confident in their design to be doing it on 12nm. They probably got great yields from day 1, and I'm really surprised that they weren't able to meet demand from the time they launched.
The 12FFN process is mature and the yields are good, but the RTX cards have large die sizes because of the features they have. I think prices do tend to go down even 2 or 3 years after a node comes out. Also, I'm willing to bet that GDDR6 prices are lower now than 9 months ago.
I'm pretty sure NVIDIA won't be shipping any 7 nm parts in any significant volume until the second half of 2020. At that time NVIDIA needs to deliver its next generation data center GPU for the Perlmutter supercomputer. I guess they will also launch gaming GPUs, because September or October of 2020 would be around the right time for it.
As far as NVIDIA's hold up, perhaps it's the current cost of the 7 nm node. AMD has no choice but to go to 7 nm. They need the power efficiency that the new node offers and they are probably willing to pay more per transistor to get it. NVIDIA doesn't need the power efficiency at the moment, so they are more willing to keep their costs down.
Navi should be cheaper, IMO. Navi has a much smaller die (2.5x smaller than Vega 64, if I recall) and uses GDDR6 instead of HBM. I don't know why AMD priced them that high, though...
"the performance, partially a consequence of going with 12nm, just isn’t there"
People should have been weaned off this by now: process shrinks stopped inherently boosting performance years ago. Power consumption drops and perf/watt increases, but perf/transistor continues to decrease (due to leakage increasing as packing density grows, coupled with power density increases) as it has done for some time, and cost/transistor has been going up since 28nm. A brief period of making dies bigger and bigger (and more and more expensive) has culminated in reticle-limit dies like GV100 and TU102, but that only makes stark the wall that process scaling hit some time ago. This is only going to continue as processes shrink further: cost/transistor will rise, perf/transistor will drop, and increasing performance means dies will continue to grow. Performance gains will continue to come from architectural changes, not process changes. Only if you're hitting the reticle limit AND cannot split your die into multiple dies for latency reasons does it make any sense to move to a smaller process, and you will take a hit to both cost/perf and perf/transistor in doing so, which may eat any gains from packing more transistors in.
I agree that performance per transistor can drop with die size due to leakage, and that new nodes are expensive at first. But once a node is mature, the cost per transistor should be lower than on the previous node. If that weren't the case, the semiconductor business would be completely sunk.
Changes and improvements in software are needed, and they are coming very slowly because they take a lot of work (coding) and knowledge, but mostly hard work, which younger generations are not willing to do (distracted by social networking and gaming).
At this point, until a GPU overhaul is made like Zen was for the CPU space, we are not going to see much change. AMD will need to introduce chiplet-design GPUs for things to really change.
How would a chiplet design really help GPUs, though? You don't have nearly the kind of I/O requirements on-chip for a GPU as you do a CPU. You're already massively parallel, and you can just shut off defective "cores" and put those parts in a lower bin; going to a 2 (or more) chip setup would probably just hurt performance.
Chiplets require more expensive packaging and to really scale, you have to design with that concept in mind. Previously the cost-benefit from an engineering standpoint was simply to eat the cost going with larger dies and release harvested products due to the rarity of fully functional chips. The costs to migrate to newer nodes is increasing and the necessity of using multi-patterning have put tighter limits on how big chips can be. Chiplets are the way forward as they solve the current issues in manufacturing and the packaging doesn't carry the same premium as before.
I don't think a chiplet would help a consumer GPU much. Chiplets allow you to use more transistors than you otherwise would be able to, but consumer GPUs don't reach the reticle limit. More transistors would also increase the price of the GPUs. The cost per transistor isn't going down as much from node to node as it used to, so if AMD tried to boost performance by building a big chip with multiple chiplets it would be very expensive, not only because of the complexity of chiplet technology but also because of the cost of all those transistors.
What AMD needs to do is continue to modify their architecture. RDNA is a good first step. It's still behind NVIDIA in memory bandwidth, energy, and workload efficiency, but it looks like it makes a good jump over GCN in those areas.
Seriously... everything you are saying is only your opinion. Chiplets bring better yields, better binning, and reduced waste, and end up giving better margins due to modularity.
As of now, we have no clue what RDNA can provide; the reviews are not even out yet, and it is the first kick at the can. If the whole game industry is backing AMD, guess what, there must be a reason for it.
Chiplets bring better yields, yes, but at the expense of worse power efficiency. If you put the chiplets on silicon interposers it's very expensive, too, so you must put them on a less expensive substrate, and that affects the communication speed. Maybe when the technology is more mature it could make sense, but there's no sense in introducing the complexity now. In any case, AMD fans talk about chiplets as if AMD is the only one pursuing them. The whole industry is: Intel, AMD, NVIDIA, TSMC, everybody.
Why do you say the game industry is "backing up AMD"? What does that mean?
Umm... the TU102 die is 754 mm². That is high end, fair enough, but it is near the limit, with only the 818 mm² GV100 being larger.
For consumer parts, though? nVidia has routinely used chips larger than 500 mm² for its high-end products, but now a chip that large is being fitted into the RTX 2070 Super. That is still a very big chip that is difficult to yield fully functional.
As prices for new process nodes go up, chiplets do start to offer price advantages, as they don't have to be manufactured on cutting-edge nodes: if moving to 5 nm in the future is not immediately cost effective, wait it out and simply add more 7 nm chiplets for a product refresh. The big advantage new nodes will bring is lower power consumption, which limits pretty much everything. However, scaling back on clocks and voltage does permit significant power reductions and thus more dies for a given power budget.
The big (potential) upside of chiplets is, of course, the ability to make cards of various capabilities simply by connecting more chiplets to the interconnect. Also, there is an inherent upside to making several, smaller building blocks (chiplets) than one large die: you don't have to throw an entire huge die with many billions of transistors out just because of one or two significant defects. Much cheaper. The big challenge is the interconnect fabric: whoever is the first to get that right has a huge advantage. Without any insider knowledge of what's going on at NVIDIA, I would be amazed if they aren't working on their own interconnect and chiplet approach with great intensity.
You say much cheaper.. the dies are cheaper, yes, but I don't believe the entire package is cheaper. Not currently, anyway, and they seemingly won't be cheaper if they are built on a silicon interposer, so they must be built on another substrate.
NVIDIA are working on their own interconnect and chiplet and have been for over 5 years or so, just like everyone else. That doesn't mean they are going to put it on a consumer GPU any time soon. You need the die costs to be pretty high, the interconnect to be pretty energy efficient, and the substrate cost to be low for it to make sense to add such complexity to something that could otherwise be made with one die.
Nvidia lowering prices to steal AMD's thunder at launch is nothing new.
But I think that it is telling that Nvidia had to crank up TDP and is willing to bundle games to do it. Time will tell exactly how these cards stack up, but it seems very clear that AMD has put the screws to Nvidia to produce more value in the market.
Why didn't the article include the specs/comps for the 2080ti? I would have liked to see the 2080ti on those charts so I could compare it. Might have been nice to include some of the 10xx generation just for comparison, but I can understand why those wouldn't be relevant.
They didn't run anything with RTX on, probably because AMD cards don't have DXR drivers, yet, so there is no competition to compare it to. It would take a subjective judgment to say "using XYZ setting with RTX on looks better and runs faster, let's take a look at those numbers." There is a place in the games/hardware enthusiast sphere for some analysis along those lines, but Anandtech doesn't seem to try to fill that niche.
Oh, I must have been confused, I thought this was a review of new nVidia parts. I guess you are saying it's a feels article for people who own the lone four-year-old AMD part...?
Use lower quality settings that are markedly slower, ignore both a significant portion of the die space and the performance implications of that die space, and purposefully avoid any RTX benches of an RTX card in an RTX review.
That is what you are defending. Reasonable, if you think so :)
Benchmarks that can't be run on a wide variety of hardware are meaningless.
If nVidia wants to include special features that only RTX cards can utilize, that's fine. But there's no sense including them in a general-use benchmark, because there's only a tiny handful of cards that can use them, and they're all the same chip anyways.
I'm sure nVidia will be glad to tell you how many RTX Ops(bungholiomarks, whatever) the new cards get. It will remain a meaningless number.
Also, the performance implication of the die space is "lost shaders". nVidia put this in solely for the compute market and THEN turned around to try to figure out a gimmick they could use to sell it to gamers, to obscure the meaningful performance loss.
Ray tracing is a feature of DirectX 12. Metro exodus is using the DirectX 12 implementation for ray tracing.
DXR runs on non-RTX cards just fine; it simply requires driver support for a DirectX feature.
The ray tracing cores don't move the needle for general compute at all; I don't know what helmet-head told you that, but they have no clue what they are talking about. The tensor cores, OTOH, are very useful for certain compute tasks. On the ray tracing side they are only used for denoising; they aren't the intersection compute units. The feature they bring to the table is DLSS, and we can all ignore that forever and that's fine.
Ray tracing is the key new feature for the next-gen consoles, the big feature every engine developer is pushing for, and what has been considered the holy grail of real-time graphics for decades. I don't get why you would ignore it in an article with RTX as the subject.
I don't know about that, maybe Ryan just has no clue and that's why he rigged the test to make the singular legacy card look better?
Can you imagine if they benched the Vega 64 in a game supporting async compute and disabled it for their benches? This is obviously worse, as it isn't just about how instructions are scheduled; it has a major impact on image quality while being faster. But you should give him the benefit of the doubt: ignorance, or maybe even promotional considerations from other parties to the site, is what caused what appears to be fanboy shilling.
Ben, it's not the answer you're going to like, but it's for apples-to-apples comparisons. I need to be able to compare cards from all vendors (including Intel, if necessary), all running the same features. This is the same benchmark suite you'll see again in 4 days, as all of this is standardized for future use.
Then why bother reviewing these products? You are running a much lower quality setting that runs markedly slower to appease the vendor that refuses to support the feature in their drivers, while simultaneously leaving out the one performance characteristic that we couldn't figure out by looking at the specs.
If AMD supported an OpenCL mode in, say, LuxMark that was both faster and more accurate, but it wasn't supported by nVidia, would you honestly not run it?
Also, you didn't even mention it. "We ran lower quality settings that were slower on these cards because AMD refuses to add driver support" would have at least explained why you did it.
Furthermore, with the amount of die space dedicated to this feature in these parts, to not see a single benchmark? Why bother reviewing them at all?
More cards will be going into Bench ahead of the RX 5700 launch. This review was very compressed for time due to everything else going on and the need to setup (and validate) a new GPU benchmark suite.
You can look at the RTX 2080 review. The RTX 2080 is slightly faster than the GTX 1080Ti and the RTX 2070 Super is slightly slower than the RTX 2080. Therefore, the RTX 2070 Super should be around GTX 1080Ti performance.
Would have been nice to see some GTX numbers in there for comparison, I can't be the only person still running a 1080 or something that is still at least semi competitive. Hell I've got a house full of 1060's on 1080p screens and haven't seen any reason to touch them yet. Also, F these prices. The new norm for GPU cost blows.
I don't think we're going to see much progress with 1080p, not for a long time. We have 60fps, and little sign of the increasing graphics fidelity that would push that fps down on hardware which currently achieves it.
@Ryan: Firstly, thanks for the quick review of these "S" cards by Nvidia. I have two questions about your description of the 2070s, you write "All told, NVIDIA has disabled 8 of TU104’s 48 SMs here, leaving a card with 40 SMs, or 2560 Turing CUDA cores." My questions are: Are those chips lower binned (partially defective) big Turings that are then "cut" down to exactly 40 SMs? And, regardless of the binning question, how does Nvidia disable SMs? Laser them out? Thanks for answering!
Thanks Ryan! So, maybe they do have a bunch of lower-binned Turings that needed a home (and a paying customer). Dating myself here, but many, many years ago, NVIDIA had a GeForce card that could be made into a Quadro costing 3x as much by changing a connection with a soldering iron and a very (!) steady hand. I never dared to try, as one wrong move with that soldering tip could trash the entire card, no repair possible.
Ah yes, you could do similar tricks with Pentium CPUs back in the day, or sometimes just by flicking a DIP switch, to get a $600 processor out of your $200 processor.
Not that long ago. The GTX 690 was launched only 7 years ago. I was tempted to try the Quadro mod for fun but eventually sold the card as is when I switched to Pascal.
I have a 1080, and running at 3440x1440 I've never had to turn down graphics settings a single notch due to FPS issues, but I've had to due to lack of VRAM on a few occasions...
8 GB is fine for the 2060, but not for the 2070 and 2080; they should both have at least 11 GB, and the 2080 Ti should have at least 14. I am happy for the people who bought a 1080 Ti: they have a card with very long legs. I wish I had been smart enough to buy one when they were around $600 to $700.
It is not. It hasn't been updated in a couple of years now, so I've tossed it out.
However if there's a newer, similar benchmark you'd like to see, then I'd be eager to hear it. The current GPU compute benchmark situation is rather ugly.
I was wondering the same thing. My GPUs spend as much time mapping the stars, looking for little green men, and working to cure disease as they do playing games. With this being one of the few sites that puts any compute benchmarks into GPU reviews (for which I am grateful), I would be fine with you keeping an older compute test or two. I'm not sure crunching for science (in whatever realm) sees apps change as often as games, and for those of us who do that, I don't think we'll complain. I understand every benchmark takes time to run, just know there are those of us who do look forward to F@H tests. Or, maybe I'm just stuck in the past, where for years, in EVERY new GPU review, I anxiously looked to see if THIS would be the card that could run Crysis :-)
FAHBench is based on core21 which is still the current workhorse for molecular dynamics simulations at Folding@Home. There is a new core22 in development but it is still in beta testing. I do see links to Anandtech's FAHbench results in multiple forums including the folding forum at Stanford so it is still important to those of us that support science with our GPUs.
Very interesting. That's good to know! I was under the impression that the project had already discarded core21. FAHBench is easy enough to run, so that wouldn't be too hard to re-integrate.
With the benchmarks now finally out, Navi is DOA if they don't reduce pricing. Why would anyone buy a 5700 XT that is going to have less performance and no hardware ray tracing support? The RTX 2070 Super is the best value card out there: it got a $100 price drop (Founders Edition) and a 15% boost in performance. With very little overclocking you get the performance of the standard RTX 2080 for $499!! Best deal out there, and it is the Founders Edition, which has a higher quality GPU.
Well, we haven't seen the Navi tests yet, but AMD will surely have to reduce pricing for it. That seems to be NVIDIA's thrust here. It's not the first time they've done it.
The timing shows it has a lot to do with Navi. NVIDIA only has a dominant market share as long as they defend it. Navi is AMD's most competitive product in a while, and it is currently targeting the most profitable part of the stack.
We must see a price cut for Navi or they won't sell many and they will build up inventory. The other option is to stop making them, which doesn't do anything to recoup the fixed R&D costs for the chip. AMD will try to maximize their profits/minimize their losses, or they might even try to gain some market share if they feel they are financially in good enough shape to do that. By RX sales do you mean Vega? There are hardly any Vega sales to begin with. They want Navi to be far more successful than Vega was. They will simply stop making Vega, or take a write-down on them if they have to, in order to get Navi out the door.
At the current price Navi gives them almost no profit, because when you have minuscule sales but large R&D costs you can sell each one for $10,000 and still lose money. They must cut the price in order to turn any profit.
All the development costs have already been accounted for in R&D spending. Vega has way too many OEM customers, even if they cancel the gamer cards, to cease production. Those have better margins.
" Nvidia’s problem is that Turing was “too good”, so many gamers out there are hanging on to their 1070’s and 1080’s and don’t see a reason to upgrade. " heh.. i have a 1060, and i would need to go to at least a 2070, but at those prices.. its out of my price range...
around the $500 CDN mark.. and because of that, and the fact that the 2070s start at what looks like $680, it will be a while before my 1060 is upgraded. i'm not the only one either.. most of my friends would like to upgrade their cards too.. but the price of the 20 series is too expensive for them as well...
The Super series is because the original 20x series wasn't a big enough improvement to justify upgrading, for sure.
But the timing, within a week of RX 5700, is obviously deliberate.
I think we all agree that AMD will have to cut the prices of the 5700s or they will sell very few. But they do not have big margins, not on 7nm. Quite possibly they might not have the margins to make much of a price cut.
If the RX 5700 hadn't come along, we'd have been waiting much longer for the Super series.
" The Super series is because the original 20x series wasn't a big enough improvement to justify upgrading, for sure " yes.. because nvidia, priced most of the 20 series out of most of its customers reach... and the super series.. is STILL out of most peoples price range VS what they have now, and there for, cor the price.. isnt worth it
" But they do not have big margins not on 7nm. " and you read this where ? or is it just your own speculation ?
Navi would have been pretty much DOA regardless of the Super cards.
But nah, I don’t see them lowering the prices. Why would they?
Between the hardcore AMD fans who’d never get an Nvidia card regardless of performance, and the folks who need something good at compute at a reasonable price, they’ll move enough Navi cards to make a pretty penny.
I don't think you realize how much it costs to tape out a 7 nm chip let alone how much it costs to develop a new GPU architecture. There are no 40% margins without volume sales.
I mean maybe they can have a 40% gross margin but that's not very useful if they don't have any operating income. There would be no healthy operating margin without volume sales.
Why? If you don't care about RTX support, but do care about free and open source driver support on Linux. Admittedly not a huge market, but it's there.
Long-term Radeon fan here, but probs gonna make the switch.
That RTX 2070 Super looks awfully good to me. And I'm absolutely *in love* with the Turing reference card design, and the chrome/shiny Super one looks so good. Aaaaa what is wrong with me? Haha - I just feel this, plus some ability to use hardware RT on my 144 Hz 1080p monitor in Metro Exodus (Something I'm waiting to replay with DXR), and almost a 2080 for 200 bucks less.
We'll see what AMD cards are like, but 2070 Super is probably going to be faster than the 5700XT and with the added features etc, I feel it will offer the same or more value whilst giving me that ability to play with DXR.
Course, there's always the 3rd option of: Don't buy anything this year cuz all I play is Warframe and the RX 590 is giving me 120FPS+ in that. Buuuut...
For me personally, the Folding@Home fp32 and fp64 tests are very revealing of a card's compute performance. I think AT authors were looking into it, but if you added a compute benchmark involving Blender for 2019 I'd be tickled. Thanks!
F@H fp64 tests were tossed a while back when both team green and team red castrated double precision (DP) capability in consumer cards. AMD had stellar DP ratios later than Nvidia did, I believe, but neither does any more. On the other hand, I'm very curious to see what kind of scientific app could make use of the blistering half precision capability of today's cards.
I only skimmed the article, but I didn't see any info on whether RTX performance also increased across the line. As someone who owns a 2560x1440 monitor, any "RTX" featured card I get has to be able to perform at 2560x1440@60fps+ with RTX on, or it's a worthless feature to me. Last time I checked, the 2080 *barely* met that spec in current games, and the 2080 Ti was really your best choice if you wanted to maintain a solid 60fps with RTX.
No, it's a fair question, because we don't know how fast those cores are running, or if that impacts everything else (considering they might take significant power).
I agree. Does RTX benefit from the additional clock speed, and are more of the cores enabled, or does the GPU downclock when running RTX, like an Intel processor doing AVX?
I hate to be that guy, but did you read the article? It answers all of that.
For example, the 2070S is a 2080 chip with a few of the CUDA cores fused off, but is clocked 100 MHz higher than the 2080 and 200 MHz higher than the old 2070.
Are "CUDA cores" the same as "RT cores"? The article actually talks about "SMs" - a term not defined before use. Perhaps they are all the same thing. In any case, not in the specifications, so easy to miss.
Also, are all parts of the GPU in the same clock domain? If not, the speed of one part may not relate to another. (And if so, they're arguably not separate "cores".)
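They are different units. In Turing the SM is the building block, and each SM bundles fixed ratios of all three core types. A sketch of the arithmetic; the per-SM counts come from Nvidia's Turing whitepaper, not from this article:

```python
# Per-SM resources on Turing (TU10x), per Nvidia's Turing whitepaper:
CUDA_PER_SM, TENSOR_PER_SM, RT_PER_SM = 64, 8, 1

def turing_counts(sms: int) -> dict:
    return {"SMs": sms,
            "CUDA cores": sms * CUDA_PER_SM,
            "Tensor cores": sms * TENSOR_PER_SM,
            "RT cores": sms * RT_PER_SM}

# RTX 2070 Super: TU104 with 8 of its 48 SMs disabled (per the article).
print(turing_counts(48 - 8))
# {'SMs': 40, 'CUDA cores': 2560, 'Tensor cores': 320, 'RT cores': 40}
```

So disabling an SM removes its shader, tensor, and RT resources together, and the 2560 CUDA core figure quoted from the article falls straight out of 40 x 64.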
Why are you only showing cards from the current generation in the benchmarks? Most of the people who opt for a lower priced card like the RTX 2070 aren't going to be in the market for getting a card upgrade to 2070 Super. Looking at the Steam Survey, we see that the most owned cards come from the last generation and the 750Ti is still in the top 10 of most popular cards! Owners of cards even one generation old can't compare their cards in the GPU 2019 benchmark section as it is only populated by new cards. It seems like you've neglected to consider the audience who would be in the market for old cards.
"Why are you only showing cards from the current generation in the benchmarks?"
Short answer: lack of time. It takes a lot of time to put together a new GPU benchmarking suite, and NVIDIA's launch inopportunely arrived right in the middle of that. So I only had a few days to benchmark cards.
GPU Bench 2019 will get filled out with more cards over time, including 980 series cards.
If AMD prices the RX 5700 at $299 and the 5700 XT at $399, it will absolutely devastate NVidia. I kind of doubt AMD has the marketing team to understand that, but semi-aggressive high-end product prices can bring them half of the GPU market, especially if well executed with a Ryzen alliance. AMD has a once-in-a-lifetime opportunity now. Next year, Intel will enter the game, and it will be a very competitive market to do anything with high margins.
LOL, at those prices it would devastate AMD. They have employees to pay, as well as foundries for the chips; they're not a charity. They need to pay for the 8GB of RAM, and the dies at 7nm won't be cheap now, maybe next year. These dies are bigger than CPUs. Compared to a CPU you're getting a PCB and DRAM as well, so you can't compare it to Ryzen CPU pricing.
Please cover AMD's input delay reduction technology in the review of the 5700 series, along with Nvidia's. It's not just about raw FPS (or frame times); input delay matters a lot.
Why wasn't the Radeon VII included in the charts? Just curious; I may have missed why, as I only skimmed the article. I'll stick with my 1080 Ti for now. It does a great job and cost me quite a bit less than current top-of-the-line NVIDIA cards. Maybe next year NVIDIA will offer something worth having.
Hrm. Looking at the cards elsewhere, it appears that the RTX 2070 Super supports NVLink. A dual RTX 2070 Super setup is an interesting alternative to the RTX 2080 and RTX 2080 Super, and might lead in performance per dollar over those cards. It might even give a single RTX 2080 Ti a performance challenge, due to the higher clocks on the RTX 2070 Super. This would be an interesting thing to test alongside the RTX 2080 Super.
Meanwhile, rip us budget plebs who were looking for improvements in $100-150 range. The 1650 is a huge disappointment and I see no new cards from AMD on the horizon.
I got an RX 590 for $160 a month ago, and it does everything at 1080p on ultra settings. I bet AMD will have at least a dozen cards in the under-$200 range; 7nm tech makes things quite a bit cheaper for them. I am not bashing NVidia technology, but it's simply pricey for a retail consumer. Apparently the future will be renting play time from cloud gaming, aka Google Play.
Blah, blah, whatever. They still cost too much. The 'cheap' card you dismissed is priced like the flagship of ten years ago. Real incomes have, if anything, gone down. If Nvidia wants to be the Apple of GPUs, they're welcome to it, but they're going to end up with similar marketshare in the desktop space.
I wish I could see how the 2060 Super compares to the previous generation, like the GTX 1070. I can't even do that in your Bench, apparently, which is disappointing. I have to go searching other sites to find the answer to the most important question: just how much more performance does a 2060 Super offer over the GTX 1070 or 1070 Ti?
$450-500 for midrange cards? I think I'll buy a $250 second-hand Vega 56, undervolt it, and play quite happily at 1440p. Maybe at high instead of ultra, but for $250 I can live with that.
There is nothing exciting in 2019 video cards unless AMD introduces massive price cuts, considering the small die size of the chips in the RX 5700.
The 2060 Super is worse value than the 2060 was (fps/$); value went down, not up. It only looks like it went up when you compare it to the extremely poor-value 2070.
The 2070 Super is better value than the old 2070: same price but higher performance. But yet again, the old 2070 was crap value, so yeah, it's better... but it's still not good.
Compared to the cards from 3.5 years ago, the 20 series Super cards are still rather poor value. We should have had much more performance for the same dollars by now.
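The fps/$ claim is easy to make concrete. The launch prices below are real; the ~12% performance uplift for the 2060 Super is my rough assumption, not a measured figure:

```python
# (price in $, relative performance with the 2060 as baseline)
cards = {"RTX 2060": (349, 1.00), "RTX 2060 Super": (399, 1.12)}
base_price, base_perf = cards["RTX 2060"]
base_value = base_perf / base_price
for name, (price, perf) in cards.items():
    print(f"{name}: {(perf / price) / base_value:.3f}x the fps/$ of the 2060")
# RTX 2060 Super: ~0.98x -- slightly *less* performance per dollar
```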
nVidia's problem is the 4K performance issue. Every two years, nVidia needs to provide new cards that match cards two names up from the previous generation: for example, GTX 980 Ti = GTX 1070, so GTX 1080 Ti should = RTX 2070. But the RTX 2070 is much, much slower than the 1080 Ti, which means nVidia hugely cut back their GPU speed (reduced CUDA cores). No one really cares about ray tracing if it can't run 4K at 60+ FPS. GTX 10 series to RTX Super is 3 years! And the 2070 Super is still slower than the 1080 Ti!!! The 2080 Super should be the same price as the GTX 1080 ($499~$559), not $699!!! $699 is for a 2080 Ti. Even now the 2080 Ti is still extremely overpriced ($1,249, not even close to the $999 MSRP), and the 2080 Ti is a year old already, so its price should be $599!!! nVidia should release a 2080 Ti Super for $699. The old cards should be priced like this: 2060 = $199, 2070 = $299, 2080 = $459, 2080 Ti = $599, and the new crap cards should only add $50: 2060 Super = $249, 2070 Super = $349, 2080 Super = $499, 2080 Ti Super = $749.
The tech press needs to give more shit to both Nvidia and AMD for pricing this stuff so high. The xx60 class for $400? C'mon, this was once $250-300. Same goes for AMD; they are just price-matching minus 10%, and they'd price just as high if they were on top. It's getting out of hand. And let's not even get into the "higher end" cards, because prices there are just beyond help. The majority of people on this planet don't earn enough to buy even the 2070, let alone the 2080 or 2080 Ti, or even the new 2060 Super. It's beyond belief, and the tech press is quiet and even praises them for delivering all of this at these prices, because it's 10-15% faster...
It's no different than with CPUs. Improvements are harder to find. For sure, cards *are* faster per price with each new generation.
Improved architectures and smaller processes have allowed more fps in the ~300W limit, resulting in new higher pricing tiers. But all cards, at all price levels, are faster.
And prices are still too high. And RTX is still not a thing. And once again, Nvidia buyers take it rough and dry and end up with a sore backside. It's amazing to me: Nvidia speeds them up a bit, runs roughshod over their customers, keeps prices still too high, and suddenly everyone and their goldfish is praising Nvidia like they are the second coming, here to save us all from life in the pit of hell. Not buying it, literally and figuratively. Just say "NO!"
Just saw this on videocardz.com, which often gets leaked (and correct) information, about AMD supposedly lowering the prices of its Navi cards for the July 7th launch. Verbatim, copied from their posting: "The information on new pricing is under embargo till July 6th. We will let you know as soon as we hear more. Update, new pricing (two confirmations):
As they write, the information is under embargo until tomorrow (so Ryan can't write about it if he still wants to get pre-release review samples to test), but they (videocardz) seem less troubled about spilling the beans. If true, I'd like to say (even as a critic of NVIDIA and its over-pricing when they can get away with it): thank you, NVIDIA. Let's hope these price cuts by AMD are real, and that this is the beginning of a long-overdue price war!
Worst naming ever by their marketing department!
Who cares about names? They can even call it Trash. If it delivers good performance then I will buy it.
You still have to pay $700 to beat the raw flops of a 1080 Ti. No deal.
the 1080ti is the 2500k of video cards.
^ This :)
I'm STILL running my 2500k. :-o
Because IT IS a high price for what it gives you.
Opencg, are you referring to SLI from 3dfx? I wonder how that version of SLI would work nowadays.
ahh you're talking about nvidia's SLI.. not 3dfx's :-)
"NVLink"Good one! That had me laughing to myself for a good 20 seconds.
SHILL ALERT!
You mean the 1080 Ti, right? The 2080 >= 1080 Ti, not the vanilla 1080. Therefore the 2070 Super is closer to 1080 Ti territory.
Exactly!
Branding matters significantly. *You* might buy RTX 2080/2070 Trash, but most people will definitely not. You might not care, but Nvidia surely does.
Just wait for the Super Turbo Hyper Championship Edition.
You forgot to add an X or II at the end.
Turbo Lubricated Pro Slider, perfect for Anal Ravaging!
I'll wait a little longer for the Ultra, which comes after the Mega.
Well-placed Street Fighter reference detected!
You forgot the **Duper** ...Super **Duper** Turbo Hyper Championship Edition ;-)
Good luck competing in the 545mm² TU104 "Die Per Wafer Category" (best guess: 95 (!) per 300mm wafer). Navi 10? 226.
I'm smelling "Offers You Cannot Refuse" combo deals on the Radeon RX 5700 series and the AMD Ryzen 3000s right off the bat. It's Jerry Sanders' golden anniversary, after all ...
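For anyone who wants to sanity-check those die-per-wafer guesses, here is a rough sketch using the textbook first-order approximation. The die areas come from the comment above; Navi 10 at ~251 mm² is an assumption on my part, and the formula ignores scribe lines, reticle constraints, and defects, so the slightly lower figures quoted above are plausible:

    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        """First-order gross die estimate: wafer area over die area,
        minus an edge-loss term for partial dies around the rim."""
        radius = wafer_diameter_mm / 2
        return int(math.pi * radius ** 2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    print(dies_per_wafer(545))  # TU104 at 545 mm^2 -> ~101 gross dies
    print(dies_per_wafer(251))  # Navi 10 at ~251 mm^2 (assumed) -> ~239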
sing_electric - Tuesday, July 2, 2019 - link
YES. I don't necessarily mind pointless numbering schemes (2070), but I really just wish they'd use all the digits - call them the '2075' or something. If you have the same number of CUDA cores but a different clock, THEN maybe add a suffix (like Ti).
Peter2k - Tuesday, July 2, 2019 - link
If I learned anything from marketing people, it should've been a 2079.99, using your example :-)
Gastec - Wednesday, July 17, 2019 - link
Soon ;)
MadManMark - Tuesday, July 2, 2019 - link
First, I don't know why anyone even cares what the name is; it's not like "2080 RTX" or "5700 XT" is a "fantastically inspiring" name either. Seriously, if that's all you can find to complain about, then you must really REALLY like these cards?
Second, even when I force myself to consider your point seriously, I still don't get it. "Super" is directly from Latin, and literally means "above" or "beyond" or "in addition." How is this not an appropriate description of the way these cards alter their previous namesakes, exactly? What would YOU call it, oh wise one Pino?
Peter2k - Tuesday, July 2, 2019 - link
K, err... People are used to number monikers, I'm guessing.
Like 780, 980, 1080, 2080
The thing to keep in mind is that partner cards already use long and stupid names, adding OC, several X's, Extreme, II, ...
They get really long names; throwing a Super in there is not going to make it any easier for people who aren't tech savvy.
Also, regarding the Latin "super": in today's world, wouldn't it have been better to use:
Legendary
Epic
Ultra rare
And so on?
Peter2k - Tuesday, July 2, 2019 - link
Should also be colored according to rarity, err, performance level:
Purple
Orange
And so on
Gastec - Wednesday, July 17, 2019 - link
PINK! According to trends ;)
Orange_Swan - Tuesday, July 2, 2019 - link
I think the worst-named one I have seen was the AORUS GeForce RTX™ 2080 Ti XTREME WATERFORCE WB 11G.
tamalero - Wednesday, July 3, 2019 - link
The problem with the numerals is that Nvidia shat on that with the 2000 series: the xx70 segment was shifted up and replaced the xx80 slot in price.
So the 2070 replaced the 1080, the 2080 replaced the Ti, and the 2080 Ti replaced the Titan slot of the older generations.
If you take the models out and check just the price point, the performance boost is minimal in that generation.
Pino - Tuesday, July 2, 2019 - link
Jeez, some serious fanboys here. Get a life, dude! I never said it's a bad product; I have an RTX 2080 btw and love it.
And the bad marketing remark goes for both Nvidia and AMD; they both suck.
They should just make it easier for the average Joe who wants to enjoy some gaming.
There is no easy way for an average consumer to pick a video card without reading a bunch of articles trying to figure out the difference between the GTX 1660, GTX 1660 Ti, RTX 2060, and RTX 2060 Super.
It could be as easy as RTX 2083, RTX 2085, RTX 2087 instead of RTX 2080, RTX 2080 super and RTX 2080 TI
Threska - Tuesday, July 2, 2019 - link
Any votes for "Super Expensive"? That would certainly ease the "who cares what it's called" meme.
philehidiot - Tuesday, July 2, 2019 - link
I'd take the cooler off, slowly, bit by bit, screw by screw, fan blade by fan blade, and call it the XXX edition. It'll be fucked after that, so it seems appropriate.
twtech - Tuesday, July 2, 2019 - link
They should have brought back the "Ultra" naming.
boozed - Tuesday, July 2, 2019 - link
The word "Super" reminds me of Edward Teller.
29a - Tuesday, July 2, 2019 - link
Super describes how sore my ass is having bought an OG 2070.
AmiableChief - Wednesday, July 3, 2019 - link
I think Nvidia just reinforced my belief in 'never buy 1st gen products'. RTX clearly was an experiment on Nvidia's part, because a lack of competition can make you carefree like that.
Questor - Friday, July 5, 2019 - link
You got rear-ended by Nvidia. Didn't see that coming after the last couple of Titan/Ti debacles? Forest for the trees, my friend.
tamalero - Wednesday, July 3, 2019 - link
It's not like they have much choice. The AIBs are already using most of the "cool"-sounding names for their card families.
PlasticMouse - Wednesday, July 3, 2019 - link
Personally, while I don't necessarily like the "Super" moniker, I do appreciate that nVidia is at least keeping the RTX 2XXX name. They could have called it an RTX 3XXX instead, in which case it would be somewhat misleading, since the underpinning technology and even the GPU die are from the 2XXX series.
Rudde - Thursday, July 4, 2019 - link
They could have called them RTX 21X0 instead of RTX 20X0 Super, but then again, that might undermine their existing offerings (the 2060 and 2080 Ti).
phoenix_rizzen - Friday, July 5, 2019 - link
Straight numbers would have been simpler, although the Ti variant adds some confusion.
2060 --> 2065
2070 --> 2075 --> 2070 Ti
2080 --> 2085 --> 2080 Ti
Creig - Tuesday, July 2, 2019 - link
Why did you test your Nvidia cards against a two-year-old Vega 64 instead of the new Radeon VII?
Ryan Smith - Tuesday, July 2, 2019 - link
Truthfully, it's what I had on hand. I was preparing for the RX 5700 review when NV interjected with the RTX 20 series Super launch. Plus, the Radeon VII is a $699 video card, so it's not really a competitor to the $399/$499 Super cards.
Karmena - Tuesday, July 2, 2019 - link
And the RTX 2080 is in what price range?
drexnx - Tuesday, July 2, 2019 - link
It makes more sense, though, because it's the same silicon as the 2070S being tested here.
sing_electric - Tuesday, July 2, 2019 - link
Besides, the real test comes when the embargo is lifted on the RX 5700 series, since that's what people will be cross-shopping (well, for those of us that actually do that, rather than getting corporate logos tattooed on our bodies).
psychobriggsy - Tuesday, July 2, 2019 - link
I wonder if AMD will do anything regarding VII pricing, as $699 is clearly not viable against the 2080 Super later this month. $599 would be more suitable, but maybe they'll go for $649 because of the 16GB of memory.
Yojimbo - Tuesday, July 2, 2019 - link
The Radeon VII has 16GB of HBM2 and is built on an expensive process. I doubt they can cut the price much. They never intended it to be a high-volume part. It will probably sell as a poor man's version of NVIDIA's Titan. The fact that they are selling the Radeon VII at all means that their Radeon Instinct MI50 isn't in much demand.
V900 - Tuesday, July 2, 2019 - link
I reckon they'll keep the price on the VII where it is. While it's a card that can't compete with Nvidia's cards on either performance or price where gaming is concerned, it still has the edge in compute.
I reckon most of the folks who get one, need it for compute performance and don’t want to spend thousands of dollars for a Quadro or Pro AMD card.
Questor - Friday, July 5, 2019 - link
"The fact that they are selling the Radeon VII at all means that their Radeon Instinct MI50 isn't in much demand."Not likely the case. AMD had to have something, a market presence in the segment until Navi. Especially since Navi will be released slowly and Nvidia already had a new part out.
Dribble - Tuesday, July 2, 2019 - link
It's not going to sell as a gaming card. I am sure there will be a few who work out a way to use the compute, and for them it's worth the $700.
maroon1 - Tuesday, July 2, 2019 - link
The other reviews show the RTX 2070 Super matching the Radeon VII.
AMD should drop the price of the Radeon VII.
eva02langley - Tuesday, July 2, 2019 - link
They should phase them out. Big Navi is coming in half a year, and they no longer make sense price-wise.
Phynaz - Tuesday, July 2, 2019 - link
Navi isn't going to help.
tamalero - Wednesday, July 3, 2019 - link
Different markets. Supposedly Vega is a compute-strong card, versus the pure gaming cards of most of Nvidia's lineup.
imaskar - Tuesday, July 9, 2019 - link
A compute-strong card without CUDA, which most compute software relies on. Cool.
sing_electric - Tuesday, July 2, 2019 - link
You might be right, but AMD's got a limited ability to lower the price on any of its Vega-based GPUs, partly because the HBM2 memory on them is ridiculously expensive, and partly because the sheer wattage of these cards and chips means that they need pretty beefy card/cooling designs, etc.
That's why we never really saw great deals on the Vega 56/64, even after the RTX cards came out with better performance/$ (or /W) for most consumer applications.
Meteor2 - Saturday, July 6, 2019 - link
And because cryptominers bought all of them.
Dark42 - Tuesday, July 2, 2019 - link
Much more important are the prices for the 5700 (XT). If AMD's Computex performance figures are correct, we now have this situation:
The 5700 XT at $449 is ~5-10% faster than the 2060 Super at $399.
The 5700 at $379 is ~10-15% faster than the 2060 at $349.
Also, there is the game bundle situation, in Nvidia's favor.
With these prices the 5700 makes no sense - for just $20 more you get a much better 2060 Super.
Similarly for the 5700 XT: $50 more for just 5-10% is too much.
AMD must lower their prices; the question is by how much?
If AMD brings the 5700 XT down to $399 and the 5700 to $349, then Nvidia is in a world of hurt.
Nvidia can't lower their prices too much because their chips are big and expensive, and they can't react with new chips anytime soon.
While AMD has room for a price war, with the small 7nm chips and more Navi variants on the horizon.
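A quick way to frame that last argument: for the pricier card to hold FPS-per-dollar constant, its performance lead has to at least match its price premium. A minimal sketch, using only the prices quoted above:

    def breakeven_speedup(base_price, new_price):
        """Extra performance (as a fraction) the pricier card needs just to
        match the cheaper card's FPS-per-dollar."""
        return new_price / base_price - 1

    print(breakeven_speedup(399, 449))  # 2060 Super -> 5700 XT: ~12.5% needed
    print(breakeven_speedup(349, 399))  # 2060 -> 2060 Super:    ~14.3% needed
    print(breakeven_speedup(379, 399))  # 5700 -> 2060 Super:     ~5.3% needed

So a 5700 XT that is only 5-10% faster than the 2060 Super falls short of the ~12.5% it needs to break even on value, which is the point being made above.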
The_Assimilator - Tuesday, July 2, 2019 - link
> Nvidia can't lower their prices too much because their chips are big and expensive
NVIDIA can lower their prices all they want, because they've got cash in the bank. But they won't: firstly because they just did, and secondly because they already have the market sewn up. Even if Navi does undercut Turing pricing, the former still has to overcome the market dominance of the latter (and of Pascal).
Meteor2 - Saturday, July 6, 2019 - link
This. The 5700 line is dead without a price cut, immediately.
Gastec - Wednesday, July 17, 2019 - link
"Nvidia can't lower their prices too much because their chips are big and expensive." You seem to know quite a lot about how much money Nvidia spends on making their products. WikiLeaks, or pure divine inspiration?
just4U - Friday, July 5, 2019 - link
AMD won't drop the price on the Vega VII; it keeps selling out.. limited supplies, or super (heh..) popular?
Orange_Swan - Tuesday, July 2, 2019 - link
Hexus have got a review with both the Radeon VII and the Vega 64.
Orange_Swan - Tuesday, July 2, 2019 - link
From their review, it seems to sit between the 2060 FE and the 2060 Super FE. If you HAD to get a GPU right now, rather than waiting for the new AMD releases, it would be a no-brainer to go for the 2060 rather than the Vega 64, as the only real advantage the 64 has is more memory, while the 2060 is cooler, quieter, and uses less power.
V900 - Thursday, July 4, 2019 - link
The 2060 Super is definitely worth the $30-50 more it costs compared to the 2060. You get way higher performance in both frame rates and RTX.
Vitor - Tuesday, July 2, 2019 - link
Crazy how 4K/60fps is still a dream even for a great GPU. Oh well, joy and fun can still be had at 1440 or 1080.
Toss3 - Tuesday, July 2, 2019 - link
That's only if you insist on having everything on ultra (which isn't the best option, considering the differences between high and ultra are so small you wouldn't know the difference).
Gastec - Wednesday, July 17, 2019 - link
You know that you CAN tweak graphics settings in video games; you don't just have to choose between generic terms like "Ultra" and "High".
Robs2010M6S - Tuesday, July 2, 2019 - link
That isn't remotely true. I have been gaming at native 4K since 2017 with a 1080 Ti, a 5GHz 8700K and 16GB of DDR4-3600, and today I can still crank out 60fps at 4K with the latest and greatest titles. The reason no one thinks it's possible is that they are stubborn hard-heads and think that every single last setting has to be cranked to full ultra, or that they must use absurd levels of AA, which is completely stupid at 4K to start with... A lot of modern games have a few settings which rape performance for very little, if anything, in return visually over their lower settings, such as high or very high instead of ultra, and when you take the time to find these settings, 60fps at 4K with a decent rig isn't hard to achieve at all.
Dug - Tuesday, July 2, 2019 - link
I would love to know the settings on these games to get a constant 60fps at 4K. The ones benchmarked here I have issues with at 4K, even when lowering settings.
I'm not saying you can't get there - avg fps seems to dictate you can - but there are just too many dips below 60 that ruin the experience.
1440 seems like the sweet spot.
JoeyJoJo123 - Tuesday, July 2, 2019 - link
Thanks for breaking the pervasive stupidity on 4K60 being impossible. I, too, have been playing latest-gen titles, just removing or disabling the awful settings (motion blur, godrays, etc.) and using medium settings on the two big performance hitters (shadows and AA), and the games still look 90% as good with 50~60% more framerate.
Icehawk - Tuesday, July 2, 2019 - link
60 FPS just isn't needed by everyone; I'm happy with mid-40s if it doesn't dip hard, so my 970 can handle 1440p and even 4K on some games. Some folks are more sensitive than others.
Gastec - Wednesday, July 17, 2019 - link
I play at 10 fps, in the sand, with my winny!
eva02langley - Tuesday, July 2, 2019 - link
Midrange. Don't kid yourself, this is midrange.
V900 - Thursday, July 4, 2019 - link
To be honest, I don't really care that much about 4K/60 Hz. The difference over 2K/1440p is negligible (especially if you're not immediately next to the monitor).
Having a raytraced 1440p picture makes a bigger difference, and hopefully the industry is moving towards that instead of ever higher frame rates and ever higher resolutions.
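The pixel arithmetic behind that tradeoff is easy to check; a small sketch with the standard resolutions and a 60 fps target:

    resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}

    for name, (w, h) in resolutions.items():
        pixels = w * h
        print(f"{name}: {pixels / 1e6:.2f} MP/frame, {pixels * 60 / 1e6:.0f} MP/s at 60 fps")

    # 4K shades 2.25x the pixels of 1440p -- a budget that could instead
    # go toward ray-traced effects at the lower resolution.
    print(3840 * 2160 / (2560 * 1440))  # 2.25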
TristanSDX - Tuesday, July 2, 2019 - link
Great rebrand, with an additional free 15-20% more perf.
Thankfully NV is a serious company.
V900 - Tuesday, July 2, 2019 - link
And so far ahead it isn't even funny.
Yeah, sure... Big Navi is coming next year. But so are Nvidia's Ampere GPUs AND their first cards on a 7nm node.
I reckon just a node shrink by itself would be enough to keep up with Big Navi.
Questor - Friday, July 5, 2019 - link
Blind.
Gastec - Wednesday, July 17, 2019 - link
Shill!
Cellar Door - Tuesday, July 2, 2019 - link
So the low-end midrange is now $399? Another $50 added, just like that. Is Nvidia seeing how much they can push the average customer?
Are they that confident that the 5700 XT will be a flop?
jordanclock - Tuesday, July 2, 2019 - link
The 2060 isn't going anywhere, so it is still $350 to get into the lowest RTX card.
sing_electric - Tuesday, July 2, 2019 - link
Fair, but the benchmarks REALLY suggest that you should spend the extra $50 if you can afford it, since you get more, faster (well, wider) RAM on top of the CUDA core increase. Still, the bigger point is that $350-400 used to get you a 'mid range' GPU in a given series, and now it's the "entry level."
Yojimbo - Tuesday, July 2, 2019 - link
It's not entry level. Don't get caught up in naming schemes. Besides, the 16 series exists.
Cellar Door - Tuesday, July 2, 2019 - link
The 16 series is low-end. Take a look at the current Nvidia product stack. The 60-series has always been lower midrange.
Yojimbo - Tuesday, July 2, 2019 - link
The 1660 Ti performs about as well as a 1070 and launched at a price a little higher than the 1060 did at its launch. How is that low-end?
Korguz - Tuesday, July 2, 2019 - link
because you have 2 or 3 tiers of cards above it ??
Yojimbo - Wednesday, July 3, 2019 - link
That doesn't make it low end... Those tiers above it are very expensive.
Meteor2 - Saturday, July 6, 2019 - link
Precisely.
Questor - Friday, July 5, 2019 - link
Which is still too much money.
V900 - Tuesday, July 2, 2019 - link
Nope. Low-end midrange is the 2060, which is still $349.
The $399 2060S is the old 2070 card and is definitely not "low-end midrange."
Besides, you can’t expect prices to never change. Chips are bigger than ever, and the nodes producing them are only going to get more expensive.
We’re brushing up against the laws of physics, so of course prices are going to creep upwards.
Meteor2 - Saturday, July 6, 2019 - link
Who said $399 is low-mid? Labels like low, mid and [spits] enthusiast are daft. Only FPS/$ matters.
Kevin G - Tuesday, July 2, 2019 - link
I'd be foolish to jump on any of these cards right now, at least until the RX 5700 reviews hit. That said, the RTX 2060 Super does look to be one very attractive card now. The rest of the lineup feels like where nVidia should have originally positioned the RTX lineup nearly a year ago, as it would have given Pascal owners more of a reason to upgrade.
This does make me wonder how much longer until nVidia has their 7 nm chips ready. If they were due at the end of the year, why not just do a small price cut if the Radeon RX lineup is competitive and wait it out? If nVidia's 7 nm chips are further out, this refresh makes far more sense, but it has me scratching my head as to what nVidia's holdup could be. If those 7 nm chips arrive in 2020, then AMD will have had 7 nm products on the market (though for data centers) for a full year ahead of nVidia, which again seems weird.
sing_electric - Tuesday, July 2, 2019 - link
To be fair, you can't get the Nvidia Super cards until 7/9, after AMD's cards are out.
When Nvidia launched the RTX series, I thought that they had to be pretty confident in their design to be doing it on 12nm. They probably got great yields from day 1, and I'm really surprised that they weren't able to meet demand from the time they launched.
Peter2k - Tuesday, July 2, 2019 - link
Probably the sheer size of the chips itself, thanks to the RTX parts.
Would've been interesting to see a GTX 2080.
Yojimbo - Tuesday, July 2, 2019 - link
The 12FFN process is mature and the yields are good, but the RTX cards have large die sizes because of the features they have. I think prices do tend to go down even 2 or 3 years after a node comes out. Also, I'm willing to bet that GDDR6 prices are lower now than 9 months ago.
I'm pretty sure NVIDIA won't be shipping any 7 nm parts in any significant volume until the second half of 2020. At that time NVIDIA needs to deliver its next-generation data center GPU for the Perlmutter supercomputer. I guess they will also launch gaming GPUs, because September or October of 2020 would be around the right time for it.
As far as NVIDIA's hold up, perhaps it's the current cost of the 7 nm node. AMD has no choice but to go to 7 nm. They need the power efficiency that the new node offers and they are probably willing to pay more per transistor to get it. NVIDIA doesn't need the power efficiency at the moment, so they are more willing to keep their costs down.
V900 - Tuesday, July 2, 2019 - link
Nvidia just taped out their first 7nm design, so we can expect it in about a year. :)
It'll be interesting to see how much they'll be able to get out of the double whammy of a new architecture and a new node.
And AFAIK they’ll use a denser 7nm node than AMD.
Meteor2 - Saturday, July 6, 2019 - link
I quite agree, Yojimbo. 7nm or any other process step is a means to an end, not an end in itself. Nvidia just don't need it; AMD do.
Gastec - Wednesday, July 17, 2019 - link
GDDR6 prices would be lower, unless there's a "power outage" every year in July.
V900 - Tuesday, July 2, 2019 - link
They just taped out their first 7nm design, so it's about a year away. :)
Korguz - Tuesday, July 2, 2019 - link
oh ??? says who ??
V900 - Tuesday, July 2, 2019 - link
Reliable source. Can't remember his name rn, but one of those industry insider honchos on Twitter.
Read it over on the Beyond3D forum.
Korguz - Tuesday, July 2, 2019 - link
um yea ok.. sure...
eva02langley - Tuesday, July 2, 2019 - link
Well, AMD will introduce bundles, games and rebates for the Ryzen and Navi launch. I will not be surprised to see Navi cut by $50 in August.
Fritzkier - Tuesday, July 2, 2019 - link
Navi should be cheaper, IMO. Navi has a much smaller die (2.5x smaller than Vega 64, if I recall) and uses GDDR6 instead of HBM. I don't know why AMD priced them that high, though...
eva02langley - Tuesday, July 2, 2019 - link
Because Nvidia was asking $500-600 for less performance.
Meteor2 - Saturday, July 6, 2019 - link
*was*
edzieba - Tuesday, July 2, 2019 - link
"the performance, partially a consequence of going with 12nm, just isn't there"
People should have been weaned off this by now: process shrinks stopped inherently boosting performance years ago. Power consumption drops and perf/watt increases, but perf/transistor continues to decrease (due to leakage increasing as packing density grows, coupled with power density increases), as it has done for some time, and cost/transistor has been going up since 28nm. A brief period of making dies bigger and bigger (and more and more expensive) has culminated in reticle-limit dies like GV100 and TU102, but that only masks the wall that process scaling, in reality, hit some time ago.
This is only going to continue as processes shrink further. Cost/transistor will rise, perf/transistor will drop, and increasing performance means dies will continue to grow. Performance gains will continue to come from architectural changes, not process changes. Only if you're hitting the reticle limit AND cannot split your die into multiple dies for latency reasons does it make any sense to move to a smaller process, and you will take a hit to both cost/perf and perf/transistor in doing so, which may eat any gains from packing more transistors in.
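To make the cost/transistor point concrete, here is an illustrative sketch. The wafer costs and densities are made-up placeholders, chosen only to show the dynamic described above (density doubles, but wafer cost more than doubles):

    def cost_per_transistor(wafer_cost_usd, transistors_per_wafer):
        return wafer_cost_usd / transistors_per_wafer

    # Made-up numbers, for illustration only.
    old_node = cost_per_transistor(wafer_cost_usd=6000, transistors_per_wafer=2e12)
    new_node = cost_per_transistor(wafer_cost_usd=13200, transistors_per_wafer=4e12)

    print(f"old node: {old_node:.2e} $/transistor")  # 3.00e-09
    print(f"new node: {new_node:.2e} $/transistor")  # 3.30e-09, i.e. ~10% worse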
Threska - Tuesday, July 2, 2019 - link
Chiplets.
Hixbot - Thursday, July 4, 2019 - link
I agree that performance per transistor can drop with die size due to leakage, and that new nodes are expensive at first. But once a node is mature, the cost per transistor should be lower than on the previous node. If that weren't the case, then the semiconductor business would be completely sunk.
Hixbot - Thursday, July 4, 2019 - link
Edit: I agree that performance per transistor can drop with process shrinks.
Gastec - Wednesday, July 17, 2019 - link
Changes and improvements in software need to happen, and they are going very slowly, because it takes a lot of work (coding) and knowledge - but mostly hard work, which younger generations are not willing to do (distracted by social networking and gaming).
eva02langley - Tuesday, July 2, 2019 - link
At this point, until a GPU overhaul is made like Zen was for the CPU space, we are not going to see much change. AMD will need to introduce chiplet-design GPUs for things to really change.
sing_electric - Tuesday, July 2, 2019 - link
How would a chiplet design really help GPUs, though? You don't have nearly the kind of on-chip I/O requirements for a GPU as you do for a CPU. You're already massively parallel, and you can just shut off defective "cores" and put those parts in a lower bin; going to a 2 (or more) chip setup would probably just hurt performance.
eva02langley - Tuesday, July 2, 2019 - link
By making smaller chips: better yield, better binning, reduced waste... across multiple products.
It is like designing one frame for multiple car models on an assembly line. You cut costs and end up with better-quality products...
Why chiplets? It is obvious as a business strategy.
jordanclock - Tuesday, July 2, 2019 - link
If it's obvious, then I'm sure AMD has considered it and decided it doesn't make sense.
Why does everyone in the comments act like they know better than entire teams of the world's best engineers?
Kevin G - Tuesday, July 2, 2019 - link
Chiplets require more expensive packaging, and to really scale you have to design with that concept in mind. Previously, the cost-benefit from an engineering standpoint was simply to eat the cost of going with larger dies and release harvested products, due to the rarity of fully functional chips. The cost of migrating to newer nodes is increasing, and the necessity of multi-patterning has put tighter limits on how big chips can be. Chiplets are the way forward, as they solve the current issues in manufacturing, and the packaging doesn't carry the same premium as before.
Gastec - Wednesday, July 17, 2019 - link
Chiplets and multi-GPU configs FTW! Also, convince the electric companies to reduce the electricity bill by 50% :)
Yojimbo - Tuesday, July 2, 2019 - link
I don't think a chiplet would help a consumer GPU much. Chiplets allow you to put in more transistors than you otherwise would be able to, but consumer GPUs don't reach the reticle limit. More transistors would also increase the price of the GPUs. The cost per transistor isn't going down from node to node as much as it used to. So if AMD tried to boost performance by building a big chip with multiple chiplets, it would be very expensive, not only because of the complexity of chiplet technology but also because of the cost of all those transistors.
What AMD needs to do is continue to modify their architecture. RDNA is a good first step. It's still behind NVIDIA in memory bandwidth, energy, and workload efficiency, but it looks like it makes a good jump over GCN in those areas.
eva02langley - Tuesday, July 2, 2019 - link
Seriously... everything you are saying is only your opinion. Chiplets bring better yields, better binning, reduced waste, and end up giving better margins due to modularity.
As of now we have no clue what RDNA can provide - the reviews are not even out yet, and it is the first kick at the can. If the whole game industry is backing AMD, guess what, there must be a reason for it.
Yojimbo - Wednesday, July 3, 2019 - link
Chiplets bring better yields, yes, but at the expense of worse power efficiency. If you put the chiplets on silicon interposers it's very expensive, too. So you must put them on a less expensive substrate, and that affects the communication speed. Maybe when the technology is more mature it could make sense, but there's no sense in introducing the complexity now. In any case, AMD fans talk about chiplets as if AMD is the only one pursuing them. The whole industry is: Intel, AMD, NVIDIA, TSMC, everybody.
Why do you say the game industry is "backing up AMD"? What does that mean?
Meteor2 - Saturday, July 6, 2019 - link
Everything Yojimbo says is fact; eva02, you're clearly very poorly informed.
Korguz - Saturday, July 6, 2019 - link
everything yojimbo says is fact ?? yea right... thats a laugh
Kevin G - Wednesday, July 3, 2019 - link
Umm... the TU102 die is 754 mm². That is high end, fair enough, but it is near the limit, with only the 818 mm² GV100 being larger.
For consumer parts, though? nVidia has routinely used chips larger than 500 mm² for its high-end products, but now a chip that large is being fitted into the RTX 2070 Super. That is still a very big chip that is difficult to yield fully functional.
As prices for new process nodes go up, chiplets do start to offer price advantages, as they don't have to be manufactured on cutting-edge nodes: if moving to 5 nm in the future is not immediately cost effective, wait it out and simply add more 7 nm chiplets for a product refresh. The big advantage new nodes will bring is lower power consumption, which limits pretty much everything. However, scaling back on clocks and voltage does permit significant power reductions, and thus more dies for a given power budget.
eastcoast_pete - Tuesday, July 2, 2019 - link
The big (potential) upside of chiplets is, of course, the ability to make cards of various capabilities simply by connecting more chiplets to the interconnect. Also, there is an inherent upside to making several smaller building blocks (chiplets) rather than one large die: you don't have to throw out an entire huge die with many billions of transistors just because of one or two significant defects. Much cheaper. The big challenge is the interconnect fabric: whoever gets that right first has a huge advantage.
Without any insider knowledge of what's going on at NVIDIA, I would be amazed if they aren't working on their own interconnect and chiplet approach with great intensity.
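That defect argument can be put in numbers with the classic Poisson yield model; a minimal sketch, assuming an illustrative defect density of 0.2 defects/cm²:

    import math

    def poisson_yield(defects_per_cm2, die_area_cm2):
        """Poisson yield model: fraction of dies that come out defect-free."""
        return math.exp(-defects_per_cm2 * die_area_cm2)

    D = 0.2  # assumed defect density, defects/cm^2 -- illustrative only

    print(f"500 mm^2 monolithic die: {poisson_yield(D, 5.00):.1%} yield")  # ~36.8%
    print(f"125 mm^2 chiplet:        {poisson_yield(D, 1.25):.1%} yield")  # ~77.9%
    # Because bad chiplets are screened out *before* packaging, a 4-chiplet
    # product wastes ~22% of its small dies instead of ~63% of big ones.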
eva02langley - Tuesday, July 2, 2019 - link
AMD is right now on the third iteration of Infinity Fabric. It is obvious that it is coming in the near future.
Threska - Tuesday, July 2, 2019 - link
I'd say both Intel and Nvidia, because all three are facing the same challenges as things get smaller.
Yojimbo - Wednesday, July 3, 2019 - link
You say much cheaper... the dies are cheaper, yes. But I don't believe the entire package is cheaper. Not currently, anyway, and it seemingly won't be if it's built on a silicon interposer, so they must be built on another substrate.
NVIDIA have been working on their own interconnect and chiplets for over 5 years or so, just like everyone else. That doesn't mean they are going to put it on a consumer GPU any time soon. You need the die costs to be pretty high, the interconnect to be pretty energy efficient, and the substrate cost to be low for it to make sense to add such complexity to something that could otherwise be made with one die.
soliloquist - Tuesday, July 2, 2019 - link
Nvidia lowering prices to steal AMD's thunder at launch is nothing new.
But I think it is telling that Nvidia had to crank up TDP and is willing to bundle games to do it. Time will tell exactly how these cards stack up, but it seems very clear that AMD has put the screws to Nvidia to produce more value in the market.
Meteor2 - Saturday, July 6, 2019 - link
Yes, AMD's main achievement is keeping Nvidia honest. But only just.
Maxiking - Tuesday, July 2, 2019 - link
RIP AMD again.
SD777 - Tuesday, July 2, 2019 - link
Why didn't the article include the specs/comps for the 2080 Ti? I would have liked to see the 2080 Ti on those charts so I could compare it. Might have been nice to include some of the 10xx generation just for comparison, but I can understand why those wouldn't be relevant.
BenSkywalker - Tuesday, July 2, 2019 - link
Running Metro on the ultra setting using an RTX card is painful; RTX mode is *faster* and looks *way* better. Why did you do this?
Yojimbo - Tuesday, July 2, 2019 - link
They didn't run anything with RTX on, probably because AMD cards don't have DXR drivers yet, so there is no competition to compare it to. It would take a subjective judgment to say "using XYZ setting with RTX on looks better and runs faster, let's take a look at those numbers." There is a place in the games/hardware enthusiast sphere for some analysis along those lines, but Anandtech doesn't seem to try to fill that niche.
BenSkywalker - Tuesday, July 2, 2019 - link
Oh, I must have been confused. I thought this was a review of new nVidia parts; I guess you are saying it's a feels article for people who own the lone four-year-old AMD part.......?
Yojimbo - Tuesday, July 2, 2019 - link
Oh, I must have been confused. I thought I was talking to a reasonable person. Guess not, so I won't bother.
BenSkywalker - Tuesday, July 2, 2019 - link
Use lower quality settings that are markedly slower, ignore both a significant portion of the die space and the relevant performance implications of said die space, and purposefully avoid any RTX benches of an RTX card in an RTX review.
Lord of the Bored - Tuesday, July 2, 2019 - link
Benchmarks that can't be run on a wide variety of hardware are meaningless.
If nVidia wants to include special features that only RTX cards can utilize, that's fine.
But there's no sense including them in a general-use benchmark, because there's only a tiny handful of cards that can use them, and they're all the same chip anyways.
I'm sure nVidia will be glad to tell you how many RTX Ops(bungholiomarks, whatever) the new cards get. It will remain a meaningless number.
Also, the performance implications of the die space are "lost shaders".
nVidia put this in solely for the compute market and THEN turned around to try and figure out a gimmick they could use to sell it to gamers, obscuring the meaningful performance loss.
BenSkywalker - Tuesday, July 2, 2019 - link
Ray tracing is a feature of DirectX 12; Metro Exodus is using the DirectX 12 implementation for ray tracing.
DXR runs on non-RTX cards just fine; it simply requires driver support for a DirectX feature.
The ray tracing cores don't move the needle for general compute at all; I don't know what helmet head told you that, but they have no clue what they are talking about. The tensor cores, OTOH, are very useful for certain compute tasks. On the ray tracing side they are only used for denoising - they aren't the intersection compute units - and the feature they bring to the table is DLSS, which we can all ignore forever, and that's fine.
The key new feature for the next-gen consoles, the big feature every engine developer is pushing for, what had been considered the holy grail of real-time graphics for decades - I don't get why you would ignore it in an article with RTX as the subject.
Meteor2 - Saturday, July 6, 2019 - link
If ray-tracing is so amazing... where's the support?
Phynaz - Wednesday, July 3, 2019 - link
Benching to the lowest common denominator is meaningless. Sorry AMD can't keep up.
eva02langley - Tuesday, July 2, 2019 - link
Man, what a nice fanboy we got...
BenSkywalker - Tuesday, July 2, 2019 - link
I don't know about that; maybe Ryan just has no clue, and that's why he rigged the test to make the singular legacy card look better?
Can you imagine if they benched the Vega 64 in a game supporting async compute and disabled it for their benches? This is obviously worse, as this isn't just about how instructions are scheduled but has a major impact on image quality while being faster. But you should give him the benefit of the doubt: ignorance, or maybe even promotional considerations from other parties to the site, is what caused what appears to be fanboy shilling.
Korguz - Tuesday, July 2, 2019 - link
which one eva02langley ?? i count 2 or 3 alone in these comments...
Phynaz - Wednesday, July 3, 2019 - link
Right there ^
Korguz - Wednesday, July 3, 2019 - link
and there ^ phynaz
Ryan Smith - Wednesday, July 3, 2019 - link
"why did you do this?"
Ben, it's not the answer you're going to like, but it's for apples-to-apples comparisons. I need to be able to compare cards from all vendors (including Intel, if necessary), all running the same features. This is the same benchmark suite you'll see again in 4 days, as all of this is standardized for future use.
BenSkywalker - Wednesday, July 3, 2019 - link
Then why bother reviewing these products? You are running a much lower quality setting that runs markedly slower to appease the vendor that refuses to support the feature in their drivers, while simultaneously leaving out the singular performance characteristic that we couldn't figure out by looking at the specs.
If AMD supported an OpenCL mode in, say, LuxMark that was both faster and higher accuracy but wasn't supported by nVidia, would you honestly not run it?
Also, you didn't even mention it. "We ran lower quality settings that were slower on these cards because AMD refuses to add driver support" would at least have explained why you did it.
Furthermore, the amount of die space dedicated to this feature in these parts, and not a single benchmark of it? Why bother reviewing them at all?
Meteor2 - Saturday, July 6, 2019 - link
I'm glad Ryan does these reviews. I expect everyone who reads them does. Heck, you're reading it...
catavalon21 - Sunday, July 21, 2019 - link
"I need to be able to compare cards from all vendors (including Intel, if necessary), all running the same features."
What? The test includes a CUDA-only benchmark (which is fine), but how is that in any way aligned with your previous comment?
akyp - Tuesday, July 2, 2019 - link
Can we please have more cards in the benchmarks? I would like to know how the 2070S compares to the 1080 Ti, for example.
Ryan Smith - Tuesday, July 2, 2019 - link
More cards will be going into Bench ahead of the RX 5700 launch. This review was very compressed for time due to everything else going on and the need to set up (and validate) a new GPU benchmark suite.
Koenig168 - Tuesday, July 2, 2019 - link
You can look at the RTX 2080 review. The RTX 2080 is slightly faster than the GTX 1080 Ti, and the RTX 2070 Super is slightly slower than the RTX 2080. Therefore, the RTX 2070 Super should be around GTX 1080 Ti performance.
wolfwalker78 - Tuesday, July 2, 2019 - link
Would have been nice to see some GTX numbers in there for comparison; I can't be the only person still running a 1080 or something that is still at least semi-competitive. Hell, I've got a house full of 1060s on 1080p screens and haven't seen any reason to touch them yet. Also, F these prices. The new norm for GPU cost blows.
imaheadcase - Tuesday, July 2, 2019 - link
Yes, yes you are the only one.
Korguz - Tuesday, July 2, 2019 - link
um.. no hes not...
catavalon21 - Tuesday, July 2, 2019 - link
um... I believe the sarcasm filter was wide open...
Meteor2 - Saturday, July 6, 2019 - link
I don't think we're going to see much progress with 1080p, not for a long time. We have 60fps, and there's little sign of increasing graphics fidelity pushing that fps down on any hardware that currently achieves it.
Dug - Tuesday, July 2, 2019 - link
Looks like my 1080 Ti will hold out another year. 2+ years seems like forever.
eastcoast_pete - Tuesday, July 2, 2019 - link
@Ryan: Firstly, thanks for the quick review of these "S" cards by Nvidia. I have two questions about your description of the 2070S. You write: "All told, NVIDIA has disabled 8 of TU104's 48 SMs here, leaving a card with 40 SMs, or 2560 Turing CUDA cores." My questions are: Are those chips lower-binned (partially defective) big Turings that are then "cut" down to exactly 40 SMs? And, regardless of the binning question, how does Nvidia disable SMs? Laser them out? Thanks for answering!
Ryan Smith - Tuesday, July 2, 2019 - link
"Are those chips lower binned (partially defective) big Turings that are then "cut" down to exactly 40 SMs?"
They don't have to be, but generally yes.
"And, regardless of the binning question, how does Nvidia disable SMs? Laser them out?"
Lasers and eFuses, as I understand it. Either way it's very much baked into the GPU itself.
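The arithmetic behind those SM counts, for reference - a small sketch using the figures from the question above (2560 cores over 40 SMs implies 64 FP32 CUDA cores per Turing SM):

    CORES_PER_SM = 2560 // 40  # 64 FP32 CUDA cores per Turing SM

    def cuda_cores(total_sms, sms_fused_off):
        """CUDA-core count after SMs are fused off for binning."""
        return (total_sms - sms_fused_off) * CORES_PER_SM

    print(cuda_cores(48, 8))  # TU104, 8 of 48 SMs disabled -> 2560 (RTX 2070 Super)
    print(cuda_cores(48, 0))  # fully enabled TU104 -> 3072 (RTX 2080 Super)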
eastcoast_pete - Tuesday, July 2, 2019 - link
Thanks Ryan! So maybe they do have a bunch of lower-binned Turings that needed a home (and a paying customer). Dating myself here, but many, many years ago NVIDIA had a GeForce card that could be made into a Quadro costing 3x as much by changing a connection with a soldering iron and a very (!) steady hand. I never dared to try, as one wrong move with that soldering tip could trash the entire card, no repair possible.
V900 - Tuesday, July 2, 2019 - link
Ah yes, you could do similar tricks with Pentium CPUs back in the day, sometimes just by flicking a DIP switch to get a $600 processor out of your $200 processor.
Overclocking actually made sense then.
Sadly, those days are long gone.
Koenig168 - Wednesday, July 3, 2019 - link
Not that long ago. The GTX 690 was launched only 7 years ago. I was tempted to try the Quadro mod for fun, but eventually sold the card as-is when I switched to Pascal.
Kvaern1 - Tuesday, July 2, 2019 - link
Still only 8GB of RAM on the 2080...
I have a 1080, and running @ 3440x1440 I've never had to turn down graphics settings a single notch due to FPS issues, but I've had to due to a lack of VRAM on a few occasions...
Beaver M. - Tuesday, July 2, 2019 - link
8 GB is fine for the 2060, but not fine for the 2070 and 2080. They should both have at least 11 GB, and the 2080 Ti should have at least 14.
I am happy for the people who bought a 1080 Ti. They have a card with a very long breath. I wish I had been so smart and bought one when they were around $600 to $700.
solnyshok - Saturday, July 6, 2019 - link
I bought one, used, for $350 about 6 months ago.
biodoc - Tuesday, July 2, 2019 - link
Is FAHBench part of the new benchmark suite? It is of interest to those of us into scientific computing.
Ryan Smith - Tuesday, July 2, 2019 - link
It is not. It hasn't been updated in a couple of years now, so I've tossed it out.
However, if there's a newer, similar benchmark you'd like to see, then I'd be eager to hear it. The current GPU compute benchmark situation is rather ugly.
catavalon21 - Tuesday, July 2, 2019 - link
I was wondering the same thing. My GPUs spend as much time mapping the stars, looking for little green men, and working to cure disease as they do playing games. With this being one of the few sites that puts any compute benchmarks into GPU reviews (for which I am grateful), I would be fine with you keeping an older compute test or two. I'm not sure crunching for science (in whatever realm) sees apps change as often as games, and for those of us who do that, I don't think we'll complain. I understand every benchmark takes time to run; just know there are those of us who do look forward to F@H tests. Or maybe I'm just stuck in the past, where for years, in EVERY new GPU review, I anxiously looked to see if THIS would be the card that could run Crysis :-)
biodoc - Wednesday, July 3, 2019 - link
FAHBench is based on core21, which is still the current workhorse for molecular dynamics simulations at Folding@Home. There is a new core22 in development, but it is still in beta testing. I do see links to Anandtech's FAHBench results in multiple forums, including the folding forum at Stanford, so it is still important to those of us who support science with our GPUs.
Ryan Smith - Wednesday, July 3, 2019 - link
Very interesting. That's good to know! I was under the impression that the project had already discarded core21. FAHBench is easy enough to run, so that wouldn't be too hard to re-integrate.
biodoc - Wednesday, July 3, 2019 - link
Thanks Ryan!
Amoro - Tuesday, July 2, 2019 - link
So a 215W TDP equals 302 watts normal gaming load. Right...
Andrei Frumusanu - Tuesday, July 2, 2019 - link
The power measurements here are of the complete system, not just the GPU.
Amoro - Tuesday, July 2, 2019 - link
My mistake, I take it back. I should read. It's actually really close then.
Meteor2 - Saturday, July 6, 2019 - link
:-)
dcole001 - Tuesday, July 2, 2019 - link
With the benchmarks now finally out, Navi is DOA if they don't reduce pricing. Why would anyone buy a 5700 XT that is going to have less performance and no hardware raytracing support? The RTX 2070 Super is the best-value card out there. It got a $100 price drop (Founders Edition) and a 15% boost in performance. With very little overclocking you get the performance of the standard RTX 2080 for $499!! Best deal out there, and it's the Founders Edition, which has a higher-quality GPU.
Yojimbo - Tuesday, July 2, 2019 - link
Well, we haven't seen the Navi tests yet, but AMD will surely have to reduce pricing for it. That seems to be NVIDIA's thrust here. It's not the first time they've done it.
V900 - Tuesday, July 2, 2019 - link
Doubt this release has anything to do with Navi.
Nvidia has over 80% of the market; AMD isn't really a concern. Heck, in the upper market level it's virtually a monopoly.
Nvidia's problem is that Pascal was "too good", so many gamers out there are hanging on to their 1070's and 1080's and don't see a reason to upgrade.
The Super RTX cards are aimed at those guys.
And I’m not so sure we will see a price cut for Navi.
A: They don't have that much room to cut the price of Navi. Why risk cannibalizing RX series sales?
B: At the current price, Navi gives them a fat, juicy margin. Lowering the price might move a few extra units, but hurt their total profits.
Yojimbo - Tuesday, July 2, 2019 - link
The timing shows it has a lot to do with Navi. NVIDIA only has a dominant market share as long as they defend it. Navi is AMD's most competitive product in a while, and it is currently targeting the most profitable part of the stack.
We must see a price cut for Navi, or they won't sell many and will build up inventory. The other option is to stop making them, which does nothing to reduce the fixed R&D costs for the chip. AMD will try to maximize their profits/minimize their losses, or they might even try to gain some market share if they feel they are financially in good enough shape to do that. By RX sales do you mean Vega? There are hardly any Vega sales to begin with. They want Navi to be far more successful than Vega was. They will simply stop making Vega, or take a write-down on them if they have to, in order to get Navi out the door.
At the current price Navi gives them almost no profit, because when you have minuscule sales but large R&D costs you can sell each one for $10,000 and still lose money. They must cut the price in order to turn any profit.
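A minimal sketch of that fixed-cost argument, with made-up numbers purely for shape - per-unit margin means little until volume shows up to amortize the R&D:

    def operating_profit(units, price, unit_cost, fixed_rnd):
        """Gross margin on units sold, less fixed R&D that must be amortized."""
        return units * (price - unit_cost) - fixed_rnd

    # Illustrative placeholders only -- not real AMD figures.
    print(operating_profit(units=50_000, price=449, unit_cost=250, fixed_rnd=100e6))
    # -> -90,050,000: thin volume loses money despite a ~44% gross margin
    print(operating_profit(units=800_000, price=399, unit_cost=250, fixed_rnd=100e6))
    # -> +19,200,000: a lower price that moves volume can still come out ahead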
Meteor2 - Saturday, July 6, 2019 - link
Minimising losses may well be AMD's goal, I fear :-/
scineram - Saturday, July 6, 2019 - link
All the development costs have already been accounted for in R&D spending. Vega has way too many OEM customers, even if they cancel the gamer cards, to cease production. Those have better margins.
Korguz - Tuesday, July 2, 2019 - link
" Nvidia’s problem is that Turing was “too good”, so many gamers out there are hanging on to their 1070’s and 1080’s and don’t see a reason to upgrade. " heh.. i have a 1060, and i would need to go to at least a 2070, but at those prices.. its out of my price range...V900 - Wednesday, July 3, 2019 - link
What's your budget?
Because I have a feeling there'll be quite a few used 2060s and 2070s available soon. ;)
Anyways, the 2060S is available for $399, which is a reasonable price considering it's basically a 2070.
The 2070S is the old 2080, and would be a decent upgrade at a reasonable price. ($499 for a high end card is pretty standard pricing.)
Korguz - Wednesday, July 3, 2019 - link
around the $500 CDN mark.. and because of that, and the fact that the 2070's start at what looks like $680, it will be a while before my 1060 is upgraded. im not the only one either.. most of my friends would like to upgrade their cards too.. but the price of the 20 series is too expensive for them as well...
Meteor2 - Saturday, July 6, 2019 - link
The Super series is because the original 20x series wasn't a big enough improvement to justify upgrading, for sure.
But the timing, within a week of the RX 5700, is obviously deliberate.
I think we all agree that AMD will have to cut the prices of the 5700s or they will sell very few. But they do not have big margins, not on 7nm. Quite possibly they might not have the margins to make much of a price cut.
If the RX 5700 hadn't come along, we'd have been waiting much longer for the Super series.
Korguz - Saturday, July 6, 2019 - link
" The Super series is because the original 20x series wasn't a big enough improvement to justify upgrading, for sure " yes.. because nvidia, priced most of the 20 series out of most of its customers reach... and the super series.. is STILL out of most peoples price range VS what they have now, and there for, cor the price.. isnt worth it" But they do not have big margins not on 7nm. " and you read this where ? or is it just your own speculation ?
V900 - Tuesday, July 2, 2019 - link
Navi would have been pretty DOA regardless of the Super cards.
But nah, I don't see them lowering the prices. Why would they?
Between the hardcore AMD fans who’d never get an Nvidia card regardless of performance, and the folks who need something good at compute at a reasonable price, they’ll move enough Navi cards to make a pretty penny.
Navi isn’t a market share play kind of card.
It’s a “gimme those 40% margins” kinda play.
Yojimbo - Tuesday, July 2, 2019 - link
I don't think you realize how much it costs to tape out a 7 nm chip, let alone how much it costs to develop a new GPU architecture. There are no 40% margins without volume sales.
Yojimbo - Tuesday, July 2, 2019 - link
I mean, maybe they can have a 40% gross margin, but that's not very useful if they don't have any operating income. There would be no healthy operating margin without volume sales.
GreenReaper - Tuesday, July 2, 2019 - link
Why? If you don't care about RTX support, but do care about free and open source driver support on Linux. Admittedly not a huge market, but it's there.
Phynaz - Tuesday, July 2, 2019 - link
AMD Linux drivers are horrible.
Meteor2 - Saturday, July 6, 2019 - link
The market isn't huge, it's *minuscule*. It really is a rounding error.
AshlayW - Tuesday, July 2, 2019 - link
Long-term Radeon fan here, but probs gonna make the switch.
That RTX 2070 Super looks awfully good to me. And I'm absolutely *in love* with the Turing reference card design; the chrome/shiny Super one looks so good. Aaaaa, what is wrong with me? Haha - I just feel this, plus some ability to use hardware RT on my 144 Hz 1080p monitor in Metro Exodus (something I'm waiting to replay with DXR), and almost a 2080 for 200 bucks less.
We'll see what AMD cards are like, but 2070 Super is probably going to be faster than the 5700XT and with the added features etc, I feel it will offer the same or more value whilst giving me that ability to play with DXR.
Course, there's always the 3rd option of: Don't buy anything this year cuz all I play is Warframe and the RX 590 is giving me 120FPS+ in that. Buuuut...
The itch. It's real.
V900 - Tuesday, July 2, 2019 - link
Must be tiring to always have your hopes up for the next card, and always having them turn out kinda meh.
Good call, since it looks like it'll be a while before AMD has something competitive. (Or something with hardware RT!)
As a future PS5 owner, I'm starting to get worried.
Korguz - Wednesday, July 3, 2019 - link
V900.. the more tiring part.. is the overpriced hardware that is out there.. gotta love lack of competition...
Meteor2 - Saturday, July 6, 2019 - link
The future is Stadia...
ballsystemlord - Tuesday, July 2, 2019 - link
@Ryan Please run your full suite of compute benchmarks on the upcoming 5700 series. It's important to me. Thanks!
biodoc - Tuesday, July 2, 2019 - link
I agree. Thanks!
Ryan Smith - Tuesday, July 2, 2019 - link
To be sure, what would you like to see that wasn't in this article?
ballsystemlord - Tuesday, July 2, 2019 - link
For me personally, the Folding@Home fp32 and fp64 tests are very revealing of a card's compute performance.
I think AT authors were looking into it, but if you added a compute benchmark involving Blender for 2019, I'd be tickled.
Thanks!
catavalon21 - Tuesday, July 2, 2019 - link
F@H fp64 tests were tossed a while back, when both teams green and red castrated double precision (DP) capability in consumer cards. AMD had stellar DP ratios later than Nvidia did, I believe, but neither does any more. On the other hand, I'm very curious to see what kind of scientific app could make use of the blistering half-precision capability of today's cards.
Threska - Friday, July 5, 2019 - link
Deep learning is the current buzzword. There's your use case.
catavalon21 - Sunday, July 21, 2019 - link
Good point, thanks.
Bp_968 - Tuesday, July 2, 2019 - link
I only skimmed the article, but I didn't see any info on whether RTX performance also increased across the line. As someone who owns a 2560x1440 monitor, any "RTX"-featured card I get has to be able to perform at 2560x1440 and 60fps+ with RTX on, or it's a worthless feature to me. Last time I checked, the 2080 *barely* met that spec in current games, and the 2080 Ti was really your best choice if you wanted to maintain a solid 60fps with RTX.
V900 - Tuesday, July 2, 2019 - link
Well yes, of course it has. When you buy a 2070 you're basically getting a 2080 now.
GreenReaper - Tuesday, July 2, 2019 - link
No, it's a fair question, because we don't know how fast those cores are running, or if that impacts everything else (considering they might take significant power).
ballsystemlord - Tuesday, July 2, 2019 - link
I agree. Does RTX benefit from the additional clock speed, and are more of the cores enabled? Or does the GPU downclock when running RTX, like an Intel processor doing AVX?
GreenReaper - Tuesday, July 2, 2019 - link
RTX 2060S: 30->34 RT cores, 2070S: 36->40 RT cores, from the spec sheets at:
https://www.pcworld.com/article/3406396/nvidia-gef...
Phynaz - Tuesday, July 2, 2019 - link
What?
V900 - Tuesday, July 2, 2019 - link
I hate to be that guy, but did you read the article? It answers all of that.
For example, the 2070S is a 2080 chip with a few of the CUDA cores fused off, but is clocked 100 MHz higher than the 2080 and 200 MHz higher than the old 2070.
GreenReaper - Tuesday, July 2, 2019 - link
Are "CUDA cores" the same as "RT cores"? The article actually talks about "SMs" - a term not defined before use. Perhaps they are all the same thing. In any case, not in the specifications, so easy to miss.GreenReaper - Tuesday, July 2, 2019 - link
Also, are all parts of the GPU in the same clock domain?
If not, the speed of one part may not relate to another.
(And if so, they're arguably not separate "cores".)
chowmanga - Tuesday, July 2, 2019 - link
Why are you only showing cards from the current generation in the benchmarks? Most of the people who opt for a lower-priced card like the RTX 2070 aren't going to be in the market for an upgrade to the 2070 Super. Looking at the Steam Survey, we see that the most-owned cards come from the last generation, and the 750 Ti is still in the top 10 of most popular cards! Owners of cards even one generation old can't compare their cards in the GPU 2019 benchmark section, as it is only populated by new cards. It seems like you've neglected to consider the audience who would be in the market for old cards.
Sincerely,
Disgruntled 980Ti owner
Ryan Smith - Tuesday, July 2, 2019 - link
"Why are you only showing cards from the current generation in the benchmarks?"Short answer: lack of time. It takes a lot of time to put together a new GPU benchmarking suite, and NVIDIA's launch inopportunely arrived right in the middle of that. So I only had a few days to benchmark cards.
GPU Bench 2019 will get filled out with more cards over time, including 980 series cards.
chowmanga - Wednesday, July 3, 2019 - link
Good to know, thanks.
Ananke - Tuesday, July 2, 2019 - link
If AMD prices the RX 5700 at $299 and the 5700 XT at $399, it will absolutely devastate NVidia. I kinda doubt AMD has the marketing team to understand that, but semi-aggressive high-end product prices can bring them half of the GPU market, especially if well executed with the Ryzen alliance. AMD has a once-in-a-lifetime opportunity now. Next year Intel will enter the game, and it will be a very competitive market to do anything with high margins.
webdoctors - Tuesday, July 2, 2019 - link
LOL, at those prices it would devastate AMD. They have employees to pay, as well as foundries for the chips. They're not a charity. They need to pay for the 8GB of RAM, and the dies at 7nm won't be cheap now, maybe next year. These dies are bigger than CPUs. Compared to a CPU you're also getting a PCB and DRAM, so you can't compare it to Ryzen CPU pricing.
Bensam123 - Tuesday, July 2, 2019 - link
Please cover AMD's input-delay reduction technology in the review of the 5700 series, along with Nvidia's. It's not just about raw FPS (or frame times). Input delay matters a lot.
pandemonium - Wednesday, July 3, 2019 - link
Quick fix for your table:
RTX 2080 Super
Launch Date 07/23/2018
Ryan Smith - Wednesday, July 3, 2019 - link
Thanks!
eek2121 - Wednesday, July 3, 2019 - link
Why wasn't the Radeon VII included in the charts? Just curious; I may have missed why, as I only skimmed the article. I'll stick with my 1080 Ti for now. It does a great job and cost me quite a bit less than current top-of-the-line NVIDIA cards. Maybe next year NVIDIA will offer something worth having.
Kevin G - Wednesday, July 3, 2019 - link
Hrm. Looking at the cards elsewhere, it appears that the RTX 2070 Super supports NVLink. A dual RTX 2070 Super setup is an interesting alternative to the RTX 2080 and RTX 2080 Super, and might lead those cards in performance per dollar. It might also give a single RTX 2080 Ti a performance challenge, due to the higher clocks on the RTX 2070 Super. This would be an interesting thing to test alongside the RTX 2080 Super.
isthisavailable - Wednesday, July 3, 2019 - link
Meanwhile, RIP us budget plebs who were looking for improvements in the $100-150 range. The 1650 is a huge disappointment, and I see no new cards from AMD on the horizon.
Ananke - Wednesday, July 3, 2019 - link
I got an RX 590 for $160 a month ago, and it does everything at 1080p on Ultra settings. I bet AMD will have at least a dozen cards in the under-$200 range. 7nm tech makes things considerably cheaper for them. I am not bashing NVidia technology, but it's simply pricey for a retail consumer. Apparently the future will be renting play time from cloud gaming, aka Google Play.
scineram - Saturday, July 6, 2019 - link
They certainly have the Navi 14 chip on the horizon. Probably in the fall, maybe 24-32 CUs.
rtho782 - Wednesday, July 3, 2019 - link
Hm, the benchmarks are very limited; it would be much better if some other GPUs (1080 Ti? 980 Ti? etc.) were in them!
imaheadcase - Wednesday, July 3, 2019 - link
He mentioned they will be in the full review of the new AMD card.
alexdi - Wednesday, July 3, 2019 - link
Blah, blah, whatever. They still cost too much. The 'cheap' card you dismissed is priced like the flagship of ten years ago. Real incomes have, if anything, gone down. If Nvidia wants to be the Apple of GPUs, they're welcome to it, but they're going to end up with similar market share in the desktop space.
yacoub35 - Wednesday, July 3, 2019 - link
I wish I could see how the 2060 Super compares to the previous generation, like the GTX 1070. I can't even do that in your Bench, apparently, which is disappointing. I have to go searching other sites to find the answer to the most important question: just how much more performance does a 2060 Super offer over the GTX 1070 or 1070 Ti?
milkod2001 - Wednesday, July 3, 2019 - link
What did you find out? That the 1070 Ti is pretty much on par with the 2060 Super?
Haawser - Wednesday, July 3, 2019 - link
$450-500 for mid-range cards? Think I'll buy a $250 second-hand Vega 56, undervolt it, and play quite happily at 1440p. Maybe at high instead of ultra, but for $250 I can live with that.
Gunbuster - Friday, July 5, 2019 - link
The PowerColor AMD Radeon Vega 56 Red Dragon is $300 on Amazon. $250 if you have an Amex with points and got the targeted 20% off discount.
Gastec - Wednesday, July 17, 2019 - link
They are actually $100 more from partners.
catavalon21 - Wednesday, July 3, 2019 - link
TH shows the 2060 Super outperforming the GTX 1080 in a dozen or so benchmarks at 1440p. Only one test went in favor of the 1080.
https://www.tomshardware.com/reviews/nvidia-geforc...
V900 - Thursday, July 4, 2019 - link
Actually, the 2060S isn't just faster than the 1070 or 1070 Ti; it also beats the 1080 in most games/benchmarks. And that's just in terms of raw performance, without including features like RTX and DLSS that the 10XX series lacks.
It’s a pretty solid upgrade for a few hundred dollars.
YouInspireMe - Wednesday, July 3, 2019 - link
Wait! They just announced the 2080 super DUPER for $698.
zodiacfml - Thursday, July 4, 2019 - link
There is nothing exciting in 2019's video cards unless AMD introduces massive price cuts, considering the small die size of the chips in the RX 5700.
none12345 - Thursday, July 4, 2019 - link
The 2060 Super is worse value than the 2060 was (fps/$); value went down, not up. It only looks like it went up when you compare it to the extremely poor-value 2070. The 2070 Super is better value than the old 2070: same price but higher performance. But, yet again, the old 2070 was crap value, so yeah, it's better... but it's still not good.
Compared to the cards from 3.5 years ago, the 20-series Super cards are still rather poor value. We should have had much more performance for the same dollars by now.
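A quick fps-per-dollar check makes the point concrete. The prices are launch MSRPs; the fps figures are made-up placeholders for illustration, so substitute real averages from your benchmark suite of choice:

```python
# Rough fps-per-dollar comparison across the 2060/2070 lineup.
# Prices are launch MSRPs; fps values are assumed placeholders, not measured.

lineup = [
    # (card, launch price in $, assumed average fps at 1440p)
    ("RTX 2060",       349,  80.0),
    ("RTX 2060 Super", 399,  88.0),
    ("RTX 2070",       499,  90.0),
    ("RTX 2070 Super", 499, 100.0),
]

for card, price, fps in lineup:
    print(f"{card:16s} {fps:5.1f} fps  ${price}  -> {fps / price:.3f} fps/$")
```

With numbers like these, the 2060 Super's fps/$ lands below the plain 2060's even though both beat the 2070, which is the shape of the argument above.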
UltraLeader - Thursday, July 4, 2019 - link
nVidia's problem is 4K performance. Every two-year cycle, nVidia needs to deliver new cards where the new x70 matches the old flagship: GTX 980Ti = GTX 1070, so GTX 1080Ti should = RTX 2070. But the RTX 2070 is definitely much, much slower than the 1080Ti. That means nVidia hugely cut back their GPU speed (reduced CUDA cores). No one really cares about ray tracing if it cannot run 4K at 60+ FPS. GTX 10 series to RTX Super is 3 years !!! But the 2070 Super is still slower than the 1080Ti !!! And the 2080 Super should be the same price as the GTX 1080 ($499~$559), not $699 !!! $699 is for the 2080Ti. Even now the 2080Ti is still extremely overpriced ($1249), not even close to the $999 MSRP, and the 2080Ti is one year old already, so its price should be $599 !!! nVidia should release a 2080Ti Super for $699. And the old cards should be priced like this: 2060 = $199, 2070 = $299, 2080 = $459, 2080Ti = $599. And the new cards should only add $50: 2060 Super = $249, 2070 Super = $349, 2080 Super = $499, 2080Ti Super = $749.
UltraLeader - Thursday, July 4, 2019 - link
After 3 years of waiting, nVidia should be releasing a graphics card that runs 4K at 120~180 FPS with the 2080Ti Super, not 4K at 60 FPS !!!
UltraLeader - Friday, July 5, 2019 - link
nVidia should release a 2080Ti Super with 7,500 CUDA cores. No one cares about ray tracing that runs below 60 FPS !!! And remove DLSS !! Crap tech !!
atiradeonag - Friday, July 5, 2019 - link
Some ppl get pretty jelly seeing the 2070S beating their flagship.
Meteor2 - Saturday, July 6, 2019 - link
Who?
Sychonut - Friday, July 5, 2019 - link
These would have performed admirably on Intel's 14+++++ node.
FMinus - Friday, July 5, 2019 - link
The tech press needs to give more shit to both Nvidia and AMD for pricing this stuff so high. xx60 class for $400? C'mon, this was once $250-300. Same goes for AMD, but they are just price matching minus 10%; they'd price just as high if they were on top. It's getting out of hand. And let's not even get into the higher-end cards, because prices there are just beyond help. The majority of people on this planet don't earn enough to even buy the 2070, let alone the 2080 or 2080Ti, or even the new 2060 Super. It's beyond belief, and the tech press is quiet and even praises them for delivering all of this at these prices, because it's 10-15% faster...
Meteor2 - Saturday, July 6, 2019 - link
It's no different than with CPUs. Improvements are harder to find. For sure, cards *are* faster per price with each new generation.
Improved architectures and smaller processes have allowed more fps within the ~300W limit, resulting in new, higher pricing tiers. But all cards, at all price levels, are faster.
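One way to see that claim is to line up roughly equal price tiers across generations. A minimal sketch, with placeholder fps figures rather than measurements:

```python
# Compare assumed average fps at (roughly) the same launch-price tier
# across generations. All fps numbers are illustrative placeholders.

tiers = {
    "$499 tier": (("GTX 1080 (2016)", 78.0), ("RTX 2070 Super (2019)", 100.0)),
    "$699 tier": (("GTX 1080 Ti (2017)", 95.0), ("RTX 2080 Super (2019)", 112.0)),
}

for tier, ((old, old_fps), (new, new_fps)) in tiers.items():
    gain = 100 * (new_fps / old_fps - 1)
    print(f"{tier}: {old} {old_fps:.0f} fps -> {new} {new_fps:.0f} fps (+{gain:.0f}%)")
```

Whether those gains are big enough after three years is the real argument, but the direction is the same at every tier.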
Questor - Friday, July 5, 2019 - link
And prices are still too high. And RTX is still not a thing.
And once again, Nvidia buyers take it rough, dry and end up with a sore backside.
It's amazing to me. Nvidia speeds them up a bit, runs roughshod over their customers, keeps prices too high, and suddenly everyone and their goldfish is praising Nvidia like they are the second coming, here to save us all from life in the pit of hell. Not buying it, literally and figuratively.
Just say, "NO!"
Nfarce - Saturday, July 6, 2019 - link
I'm going to say NO to your comment. I want the best for MY MONEY. AMD does not provide. Period and end of discussion.
Korguz - Saturday, July 6, 2019 - link
So you are happily paying nvidia's prices then?? Wow.. must be nice to have more money than you know what to do with....
eastcoast_pete - Friday, July 5, 2019 - link
Just saw this on videocardz.com, which often gets leaked (and correct) information, about AMD supposedly lowering the prices of its Navi cards for the July 7th launch. Verbatim, copied from their posting:
"The information on new pricing is under embargo till July 6th. We will let know you know as soon as we hear more.
Update, new pricing (two confirmations):
RX 5700 XT 50th Anniversary: $449
RX 5700 XT: $399
RX 5700: $349"
As they write, the information is under embargo until tomorrow (so Ryan can't write about it if he still wants to get pre-release review samples to test), but they (videocardz) seem not to be so troubled about spilling the beans.
If true, I'd like to say (even as a critic of NVIDIA and its over-pricing whenever it can get away with it): Thank you, NVIDIA. Let's hope these price cuts by AMD are true, and that this is the beginning of a long-overdue price war!
Gastec - Wednesday, July 17, 2019 - link
Obscene prices!
pcgpus - Wednesday, September 11, 2019 - link
Nice review. The 2060 Super and 2070 Super look very good. If you want to compare this article's results with other sites, you can follow this link:
https://warmbit.blogspot.com/2019/08/zestawienie-2...
There are results from 9 sites, in 3 resolutions, from 30 games!
After the page loads, please pick your language from Google Translate (right side of the page).