GTX at RTX prices. Not really a fan of that graph at the end. I mean, 1080 Tis were about $500 about half a year ago, so the perf/dollar change is surely worse than -7%, more like -30%, especially with the 36% perf gain quoted being inflated as hell. Double the price and +20% perf is not -7%, Anand.
They are comparing them based on their launch MSRP, which is fair.
Actually, it seems they used the cut price of $500 for 1080 instead of the $600 launch MSRP. The perf/$ increases by ~15% if we use the latter, although it's still a pathetic generational improvement, considering 1080's perf/$ was ~55% better than 980.
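For what it's worth, most of the disagreement above comes down to which old price you divide by. A rough back-of-the-envelope sketch (plain Python; the 36% gain and the $500/$600/doubled prices are just the figures thrown around in these comments, not the review's chart data):

```python
# Perf-per-dollar change between an old and a new card.
# All inputs here are illustrative numbers taken from the comments above.
def perf_per_dollar_change(perf_gain, old_price, new_price):
    """Relative change in perf/$ going from the old card to the new one."""
    return (1.0 + perf_gain) * old_price / new_price - 1.0

# +36% performance at roughly double the price:
print(perf_per_dollar_change(0.36, old_price=500, new_price=1000))  # ~ -0.32

# Same gain, but measured against a $600 launch MSRP instead of a $500 street price:
print(perf_per_dollar_change(0.36, old_price=600, new_price=1000))  # ~ -0.18
```

Which baseline you pick easily swings the result by 10-15 points, which is most of what is being argued about here.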
In all fairness, when comparing products from 2 different generations that are both still on the market you should compare on both launch price and current price. The purpose is to know which is the better choice these days. Knowing the historical launch prices and trends between generations is good for the record, but very few readers care about it for more than curiosity and theoretical comparisons.
The 1060 has been in retail for 2.5 years, so the perf gains offered here are a lot less than what both Nvidia and AMD need to offer. They are pushing prices up and up but that's not a long term strategy.
Then again, Nvidia doesn't care much about this market; they are shifting to server, auto and cloud gaming. Five years from now, they could afford to sell nothing in the PC space, unlike both AMD and Intel.
Did you actually read the article before commenting on it? It is right there, on the last page - a 21% increase in performance/dollar, which, added to the very decent gain in performance/watt, would suggest the company is anything but resting on its laurels. Unlike another company, which has been brute-forcing an architecture that is more than a decade old, and squandering their intellectual resources to design budget chips for consoles. :P
We didn't wait 2.5 years for such a meager performance increase. Architecture performance increases were much higher before Turing, Nvidia is milking us, can't you see?
DING ! I know it's my own bias, but branding looks like a typical, on-going 'bait-and-switch' scam whereby nVidia moves their goal posts by whim -- and adds yet another $100 in retail price (for the last 2 generations?). For those fans who spent beeg-buckeroos on a GTX 1070 (or even a 1060 6GB), it's The Way You Meant to Be 'Ewed-Scrayed.
Do you remember how much CPUs used to improve from generation to generation... 3-5%... That was when there was no competition. Now when there is competition we see a 15% increase between generations, or less. Welcome to the future of GPUs: 3-5% increases between generations if there is no competition, maybe 15% or less if there is competition. The good point is that you can keep the same GPU 6 years and have no need to upgrade and lose money.
"Instead, AMD’s competitor for the GTX 1660 Ti looks like it will be the Radeon RX Vega 56. The company sent word last night that they are continuing to work with partners to offer lower promotional prices on the card, including a single model that was available for $279, but as of press time has since sold out. Notably, AMD is asserting that this is not a price drop, so there’s an unusual bit of fence sitting here; the company may be waiting to see what actual, retail GTX 1660 Ti card prices end up like. So I’m not wholly convinced we’re going to see too many $279 Vega 56 cards, but we’ll see."
If Vega 56 becomes available for $279 regularly, then it will be a better deal. Right now, that price is only being offered on one model that you can't buy. The cheapest Vega 56 model on Newegg is $399 right now.
You can get it for the same price, move on, it is the better deal. Also, expect higher prices than MSRP from OEMs creating special models with different coolers.
And basically... the RX 3080 is supposed to be between $250-300 with Vega 64 +15% performance. The interest in this card is going to be short lived.
Yes, we know -- AMD's next card is always going to be the one to buy.
But isn't it odd that by the time it stops being the "next" card and becomes the current card, suddenly it isn't that appealing anymore, and folks like you immediately move on to the next "next" card? Vega was the "next" card a year ago ...
Not anymore, no. As far as I've checked, all the $279 Vega 56s have sold out at the moment, and with AMD stating that it's a temporary price cut, I'm not expecting any more $279 Vegas to come our way.
I feel like they don't realize that until they improve the performance per $$$ there is very little reason to upgrade. I'm happy sitting on an older card until that changes. Though if I were on a lower end card I might be kicking myself for not just buying a better card years ago.
Since the price bracket moved up so much relative to performance compared with the last generation, there is absolutely no reason to upgrade. It's different if you actually need a GPU.
Agreed. It's kind of wild that I have to pay $350 to get on average 10fps better than my 980 Ti. If I want a real solid performance improvement I have to essentially pay the same today as when the 980 Ti was brand new. The 2070 is anywhere between $500-$600 right now depending on model and features. IIRC the 980 Ti was around $650. And according to AnandTech's own benchmarks it gives on average 20fps better performance. That's 2 generations and 5 years, and I get 20fps for $50 less? No. I should have a 100% performance advantage for the same price by this point. Nvidia is milking us. I'm eyeballing it a bit here, but the 2080 Ti is a little bit over double the performance of a 980 Ti. It should cost less than $700 to be a good deal.
I agree in that this card is a tough sell over a RTX2060. Most consumers are going to spend the extra $60-$70 for what is a faster, more well-rounded and future-proof card. If this were $100 cheaper it'd make some sense, but it isn't.
I'm not so sure about the value prospects of the 2070. The banner feature, real-time ray tracing, is quite slow even on the most powerful Turing cards and doesn't offer much of a graphical improvement for the variety of costs involved (power and price mainly). That positions the 1660 as a potentially good selling graphics card AND endangers the adoption of said ray tracing such that it becomes a less appealing feature for game developers to implement. Why spend the cash on supporting a feature that reduces performance and isn't supported on the widest possible variety of potential game buyers' computers and why support it now when NVIDIA seems to have flinched and released the 1660 in a show of a lack of commitment? Already game studios have ditched SLI now that DX12 pushed support off GPU companies and into price-sensitive game publisher studios. We aren't even seeing the hyped up feature of SLI between a dGPU and iGPU that would have been an easy win on the average gaming laptop due in large part to cost sensitivity and risk aversion at the game studios (along with a healthy dose of "console first, PC second" prioritization FFS).
What I think you're missing is that the DirectX rendering API set by Microsoft will be implemented by all parties sooner or later. It really *does* meet a need which has been approximated in any number of ways previously. Next generation consoles are likely to have it as a feature, and if so all the AAA games for which it is relevant are likely to use it.
Having said that, the benefit for this generation is . . . dubious. The first generation always sells at a premium, and having an exclusive even moreso; so unless you need the expanded RAM or other features that the higher-spec cards also provide, it's hard to justify paying it.
I'm not sure about that. It is also an increase in thermals and power consumption that also costs money overtime. RTX advantage is basically null at that point unless you want to play at low FPS so 2060 advantage is 'merely' raw performance.
For most people and current games the 1160 already offers ultra great performance, so I'm not sure people are gonna shell out even more money for the 2060 since the 1160 is already a tad expensive.
The 1160 seems to be an awesome combination of performance and efficiency. Would it be better $50 lower? Of course, but why, since they don't have real competition from AMD...
Why would nvidia give up on a market that costs them almost nothing? If 5 years from now they do cloud gaming, then they are pretty much still doing GPUs. Anyway, even in 5 years cloud gaming will still be a minor part of the GPU market.
"They are pushing prices up and up but that's not a long term strategy."
That comment completely ignores the massive increase in value over both the RX 590 and Vega 56. Nvidia produces a card that both makes the RX 590 at the same price point completely unjustifiable, and prompts AMD to cut the price of the Vega 56 in HALF overnight, and you are saying that it is *Nvidia* not *AMD* that is charging high prices?!?! I've always thought the AMD GPU fanatics who think AMD delivers more value were somewhat delusional, but this comment really takes the cake.
AMD selling overpriced cards does not detract from the point that Nvidia is also attempting to raise prices. Both companies have put out underwhelming products this gen.
Aww, I was hoping this release would lower the price on those used 1070's. Oh well. I'll still probably go for a used 1070 over this one. Nearly identical in every way and can be found for as low as $200.
Reading "Turing Sheds" in the headline makes me wonder what he could have done with a couple of these at Bletchley Park (which, for anybody passing, is well worth the steep entry fee — see bletchleypark.org.uk).
Sorry for the interruption. I'll return you to the normal service.
"Now the bigger question in my mind: why is it so important to NVIDIA to be able to dual-issue FP32 and FP16 operations, such that they’re willing to dedicate die space to fixed FP16 cores? Are they expecting these operations to be frequently used together within a thread? Or is it just a matter of execution ports and routing?"
It seems pretty likely that they added the FP16 cores because it simplified design, drivers, etc. It was easier to just drop in a few (as you mentioned) tiny FP16 cores than it was to change behavior of the architecture.
FP16 is a way to simplify shading computation compared to the commonly used FP32. It allows for higher bandwidth (x2) and higher speed (x2, so half the energy for the same work) within the same HW area. It was a feature used in HPC, where bandwidth, power consumption and of course computation time are quite critical. It then ended up in gaming-class architectures just because they found a way to exploit it there too. Some games have started using FP16 for their shading. On AMD's side, only Vega-class cards support packed FP16 math.
The use of an INT ALU that executes integer instructions together with the FP ones is instead an exclusive feature that can really improve shading performance, much more than any other complex feature like the highly threaded (constantly interrupted) mechanism that is needed on architectures that cannot keep their ALUs fed. In fact we see that with fewer CUDA cores Turing can do the same work as Pascal while using less energy. And no magic ACE is present.
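A minimal illustration of the bandwidth half of the FP16 argument in the comment above - this is just NumPy on a CPU, so it only shows the "half the bytes for the same element count" part, not Turing's packed-math throughput:

```python
import numpy as np

# One 1920x1080 RGBA buffer, stored once as FP32 and once as FP16.
# Same number of values, half the bytes to move -> half the memory traffic,
# which is the "x2 bandwidth" point; the "x2 speed" part additionally needs
# hardware that can issue two FP16 ops per FP32 lane (packed math).
values = 1920 * 1080 * 4
fp32_buf = np.zeros(values, dtype=np.float32)
fp16_buf = np.zeros(values, dtype=np.float16)

print(fp32_buf.nbytes / 2**20)  # ~31.6 MiB
print(fp16_buf.nbytes / 2**20)  # ~15.8 MiB
```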
They didn't just drop in a few. It seems they have enough for 2x FP32 performance. Why are they dual issue? My guess is it is because that is what's necessary for Tensor Core operation. I think NVIDIA is being a bit secretive about the Tensor Cores. It's clear they took the RT Core circuitry out of the Turing minor die. As far as the Tensor Cores, I'm not so sure. Think about it this way: suppose Tensor Cores really are specialized separate cores. Then they also happen to have the capability of non tensor FP16 operation in dual issue with FP32 CUDA cores? Because if they don't then whatever functionality NVIDIA has planned for the FP16 cores on Turing minor would be incompatible with Turing major and Volta. I don't see how that can be the case, however, because, according to this review, Turing major is listed as the same CUDA compute generation as Turing minor. Now if the Tensor Cores can double as general purpose FP16 CUDA cores, then what's to say that FP16 and FP32 CUDA cores can't double as Tensor Cores? That is, if the Tensor Core can be made with two data flow paths, one following general purpose FP16 operations and one following Tensor Core instruction operations, then commutatively a general purpose CUDA core can be made with two data flow paths, one following general purpose operations and one following Tensor Core instruction operations.
When Turing came out with Tensor Core operations but with FP64 cores cut from the die and no increase in FP32 CUDA cores per SM over Volta I was surprised. But with this new information from the Turing Minor launch it makes more sense to me. I don't know if they have the dedicated FP16 cores on Volta. If they do then the FP64 cores don't need to play the following role, but if they are able to use the FP64 cores as FP16 cores then hypothetically they have enough cores to account for the 64 FMA operations per clock per SM of the 8 Tensor Cores per SM. But on Turing major they just didn't have the cores to account for the Tensor Core performance. These FP16 cores on Turing minor seem to be exactly what would be necessary to make up for the shortfall. So, my guess is that Turing major also has these same cores. The difference is either entirely one of firmware/drivers that allows the Tensor Core data path to be operated on Turing major but not Turing minor or Turing major has some extra circuitry that allows the CUDA cores to be lashed together with an alternate data flow path that doesn't exist in Turing minor.
Agreed. It seems likely that most of the hardware is present, just not active.
Frankly, it's not clear why these couldn't be binned versions of the higher-level chips that haven't met the QA requirements, which would be one reason it took this long to release - you need enough stock to be able to distribute it. If it's planned out in advance, they just need X good CUDA cores and Y ROPs that run at Z Mhz, combined with at least [n] MB of cache. Fuse off the bad or unwanted portions to save on power and you're good.
Of course it *could* be like Intel, which truly makes smaller derivatives. If so, that suggests they'll be selling a lot of these cards. Even then, though, Yojimbo's supposition about the core design being essentially the same is likely to be true.
Yeah the die size and transistor count is still large for the number of CUDA cores, being that this review claims the 1660Ti has all SMs on the TU116 enabled. I said it was clear they took RT circuitry out. But I was wrong, that's not clear. It seems the die area per CUDA core and transistors per CUDA core of the TU116 are extremely close to the TU106, which is fully-enabled in RTX 2070. If this is the result of the INT32 and FP16 cores of the TU116 then where exactly do any cost savings of removing the Tensor Cores and RT Cores come from? Definitely the cost of completely re-architecting another GPU would outweigh the slight reduction in die size they seem to have achieved.
On the other hand, I'd imagine TU116 will be such a high volume part that unless yields are really lousy, binning alone won't provide enough chips (and where are the fully enabled versions of the 284 mm^2 RTX dies going, anyway? No such product has thus far been announced.) Perhaps such a small number of RT cores was judged to be insufficient for RTX gaming. Even if it were not impossible to create some useful effects with that many RT cores, if developers were incentivized to target so few RT cores with their RTX efforts because the volume of such RT-enabled cards was significant, then they might reduce the scope and scale of RTX enhancements they undertake, putting a drag on the adoption of the technology. So NVIDIA opted to disable the RT cores, and perhaps the Tensor Cores, present on the dies even when they are actually fully functioning. Perhaps it was simply cheaper to eat the wasted die space per chip than to design an entirely new GPU with the RT cores and Tensor Cores removed.
My guess is that in the next (7 nm) generation, NVIDIA will create the RTX 3050 to have a very similar number of "RTX-ops" (and, more importantly, actual RTX performance) as the RTX 2060, thereby setting the capabilities of the RTX 2060 as the minimum targetable hardware for developers to apply RTX enhancements for years to come.
I wish there were an edit button. I just want to say that this makes sense, even if it eats into their margins somewhat in the short term. Right now people are upset over the price of the new cards. But that will pass assuming RTX actually proves to be successful in the future. However, if RTX does become successful but the people who paid money to be early adopters for lower-end RTX hardware end up getting squeezed out of the ray-tracing picture that is something that people will be upset about which NVIDIA wouldn't overcome so easily. To protect their brand image, NVIDIA need a plan to try to make present RTX purchases useful in the future being that they aren't all that useful in the present. They can't betray the faith of their customers. So with that in mind, disabling perfectly capable RTX hardware on lower end hardware makes sense.
As a SFFPC (mITX) user, I'm enjoying the thicker, but shorter, card as it makes for easier packaging. Additionally, I'm enjoying the performance of a 1070 at reduced power consumption (20-30w) and therefore noise and heat!
Thanks! Also a bit disappointed by NVIDIA's continued refusal to "allow" a full 8 GB VRAM in these middle-class cards. As to the card makers omitting the VR-required USB 3 Type-C port, I hope that some others will offer it. Yes, it will add $20-30 to the price, but I don't believe I am the only one who'd like the option to try some VR gaming out on a more affordable card before deciding to start saving money for a full premium card. However, how is VR on Nvidia with 6 GB VRAM? Is it doable/bearable/okay/great?
>However, how is VR on Nvidia with 6 GB VRAM? Is it doable/bearable/okay/great?
It's 'fine' - the GTX 1050 Ti is VR capable with only 4GB VRAM, although it's not really advisable (see Craft Computing's 1050 Ti VR assessment on YouTube - it's perfectly usable and a fun experience). The RTX 2060 is a very capable VR GPU, with 6GB VRAM. It's not really VRAM that is critical to VR GPU performance anyway - more the raw compute performance in rendering the same scene from 2 viewpoints simultaneously. So, I'd assess that the 1660 Ti is a perfectly viable entry-level VR GPU. Just don't expect miracles.
This article reads a little like that infamous Steve Ballmer developers thing except it's not "developers, developers, developers, etc" but "traditional, traditional, traditionally, etc." instead. Please explore alternate expressions. The word in question implies long history which is something the computing industry lacks and the even shorter time periods referenced (a GPU generation or two) most certainly lack so the overuse stands out like a sore thumb in many of Anandtech's publications.
Yup, for a 120W TDP of all things. But it's in the charts as a 2.75-slot width card, so EVGA is probably hoping that no one realizes that expansion slots would not actually permit the remaining .25 slot width to be used for anything.
Awesome review but you guys always missed the target audience. Lots of gamers are looking for the benchmarks of online games like PUBG, Fortnite, Apex, Overwatch, etc...
Anandtech is a highly technical hardware review site, not a pop culture gaming site. The benchmarks are meant to be a highly repeatable, representative sample. Online multiplayer-only games are rarely repeatable run to run due to netcode and load variations, and you can often only run on the latest patch meaning you can't make an apples to apples comparison with older tests.
rwsgaming " but you guys always missed the target audience. Lots of gamers are looking for the benchmarks of online games like PUBG, Fortnite, Apex, Overwatch, etc..." none of the games they use for testing.. are ones i play.. so meh... hehehhehe
I started reading this article until the SSD buyers guide video started taking up 1/4 of my screen space after scrolling down a bit. I'll read about the card on a site that doesn't take up so much of my screen space for something I have no interest in. This site sucks so much since Anand sold it.
At $280 for a Vega 56 with 3 games, it is a no-brainer and one of the best values as of late. Can't wait for Navi to disrupt this overdue, stagnant market even more.
Yes, it will be a new black hole in AMD's quarterly results if the production cost/performance is the same as the old GCN line... You see, selling an HBM monster like Vega for that price simply means that the project is a complete flop (as Fiji was), and nvidia can continue selling its mainstream GPUs at the prices they want despite the not-so-good market period.
They disable those before benchmarking. From the article: "For our testing, we enable or adjust settings to the highest except for NVIDIA-specific features"
This is such a joke. Vega 56 is now the same price and out performs this terrible product, and the 1070 (AIB versions) performs similarly enough that the 1660ti has no real place in the market right now. Nvidia is a greedy terrible company. What a joke.
I followed your advice and bought a Vega 56 instead of a 1660 Ti, and now my power supply has been making those weird noises animals make when they're suffering and need help. What do I do?
Fanboy nonsense alert!!! Unless you bought your power supply at a Chinese flea market, ignore this dude.
(Granted there are totally cases where you'd want something like a 1660Ti over a V56 for efficiency reasons [say ultra SFF], but this guy's spitting nonsense)
The V56 uses ~200W nominally depending on your choice of settings; in the detailed Tom's review it goes as low as 160W at the most minimum performance level and as high as 235W depending on the choice of power BIOS. The 1660 Ti is then shown to use ~125W in BF1, and (assuming Tom's tested the V56 performance on stock settings) Anand's BF1 test shows a 9FPS lead (11%) over the 1660 Ti. I'll trade that 11% performance for 40% less (absolute scale) power usage any day - my PSU ain't getting any younger and "lol just buy another one" is dumb advice that dumb people give.
Not criticizing, simply adding: Several times in the past, honest review sites did comparisons of electricity costs in several places around the States and a few other countries with regard to brand A video card having a lower power draw than brand B video card. The idea was to calculate a reasonable overall cost for the extra power draw, and whether it was worth worrying about or worth specifically buying the lower-draw card. In each case the additional power use was negligible in dollar terms (or whatever currency). A lot of these great sites have died out or been bought out and are gone now. It's a darned shame. We used to actually get real, useful information about products and what all these values actually mean to the user/customer/consumer. We used to see the same for power supplies too. I haven't seen anything like that in years now. Too bad. It proved how little a lot of the numbers mattered in real life to real bill-paying consumers.
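That old-style electricity math is easy enough to redo; here is a quick sketch with made-up but plausible inputs (the wattage gap, hours of play and price per kWh below are assumptions, not figures from this review):

```python
# Yearly cost of the extra power a hungrier card draws while gaming.
extra_watts = 100        # e.g. a card drawing ~100W more under load
hours_per_day = 2.0      # gaming time per day
price_per_kwh = 0.13     # USD, roughly a US residential average

extra_kwh_per_year = extra_watts / 1000.0 * hours_per_day * 365
print(round(extra_kwh_per_year * price_per_kwh, 2))  # ~ 9.49 USD per year
```

At that scale the difference is lunch money, which is exactly the "negligible" conclusion those old comparisons reached.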
Man this sucks, clearly this card isn't enough for 4k and I'm not willing to spend on a RTX 2070. Can I hope for a GTX 1170 at like $399? 8gb of RAM please. I'm not buying a new card until it's $400 or less and has 8gb+, my 970 runs 1440p maxed or close to it in almost all AAA games and even 4k in some (like Overwatch) so I'm not going for a small improvement - after 2 gens I should be looking at close to double the performance but it sure doesn't look like that's happening currently.
And I think he will be even more disappointed if he's looking for a 4K card that is able to play modern games.
BTW: No 1170 will be made. This card is the top Turing without RT+TC, and so it's the best performance you can get at the lowest price. Other Turing parts with no RT+TC will be slower (though probably cheaper, but you are not looking for just a cheap card, you are looking for 2x the performance of your current one).
Huh, let's see... designing a new chip costs a lot of money, especially when it is not that tiny. A chip bigger than this TU116 would just be faster than the 2060, which has a 445mm^2 die that has to be sold with some margin (unlike AMD, which sells Vega GPU+HBM at the price of bread slices and at the end of the quarter reports earnings that are a fraction of nvidia's; but that's good for AMD fans, it is good that the company loses money to make them happy with oversized HW that performs like the competition's mainstream parts). So creating an 1170 simply means killing the 2060 (and probably the 2070), just defeating the original purpose of these cards as the first lower-end (possibly mainstream) HW capable of RT.
Unless you are supposing nvidia is going to completely scrap their idea that RT is the future and that its support will be expanded in future generations, there's no valid, rational reason for them to create a new GPU that would replace the cut-down version of TU106.
All this without considering that AMD is probably not going to compete at 7nm, as with that process they will probably only manage to reach Pascal performance, while at 7nm nvidia is going to blow any AMD solution away in terms of absolute performance, performance per W and performance per mm^2 (despite the addition of the new computational units that will find more and more usage in the future... no one has yet thought of using tensor cores for advanced AI, for example).
So, no, there will be no 1170 unless it is a further cut-down TU106 that in the end performs just like TU116 and is merely a recycling of broken silicon.
Now, let me hear what makes you believe that a 1170 will be created.
I do not know if they will create an 1170 or not; to be fair, I am surprised they even created the 1160. You have a very good point, upon reflection, it is quite likely such a product would impact RTX sales. I was just curious what had you thinking that way.
Hey not sure if you're opposed to used GPUs... but you can get a used, overclocked, 3rd party GTX 1080 with 8GB vram on eBay for about $365-$400. In my opinion it's an amazing deal and I can tell you from experience that it would satisfy the performance jump that you're looking for. It's actually the exact situation I was in back in June of 2016 when I upgraded my 970 to a 1080. Being a proper geek, I maintained a spreadsheet of my benchmark performance improvements and the LOWEST improvement was an 80% gain. The highest was a 122% gain in Rise of the Tomb Raider (likely VRAM related but impressive nonetheless). Honestly I don't believe I've ever experienced a performance improvement that felt so "game changing" as when I went from my 970 to the 1080. Maybe waaay back when I upgraded my AMD 6950 to a GTX 670 :). If "used" doesn't turn you off, the upgrade of your dreams is waiting for you. Good luck to you!
There is no way your 970 runs 1440p maxed in modern AAA games. Unless your definition of maxed includes frames well below 60 and settings well below ultra.
I have a 1060 and it needs medium to medium-high to reliably hold 60FPS @ 1440p.
$280 for ~40% on average better performance and still 6GB of memory? I already have a 6GB 1060. I suppose I have to wait for navi or 30 series before actually upgrading.
I guess you missed the part where their memory compression technology has increased performance another 20-33% over previous generation 10xx cards, negating the need for higher memory bandwidth and more space within the card. So, 6GB on this card is essentially like 8-9GB on the previous generation. That is what compression can do (as long as you can compress and decompress fast enough, which doesn't seem to be a problem for this hardware).
I'm not an expert on this topic, but they state compression is used as a means to improve bandwidth, not memory space consumption.
Someone more knowledgeable can clear this up, but to my understanding textures are compressed when moving from vram to gpu, and not when loading from hdd/ssd or system memory into vram.
To all those rejoicing that Vega 56 is selling for the price of a slice of bread: that's how failing architectures end up when they are a generation behind. Yes, nvidia cards are pricey, but that's because AMD solutions can't stand up to the competition even with expensive components like HBM and tons more watts to suck.
So stop laughing about how poor this new card's price/performance ratio is; after a few weeks it will have whatever ratio the market gives it. What we have seen so far is that Vega's appeal has gone below ground level, and as with any new nvidia launch, AMD can answer only with a price cut, closely followed by a rebrand of something that is OCed (and pumped with even more watts).
GCN was dead at its launch time. Let's really hope Navi is something new, or we will have an nvidia monopoly on the market for another 2-year period.
I don't have any such hopes for Navi. The reason is that AMD is still competing for the console and part of that is maintaining backwards compatibility for the next generation of consoles with the current gen. This means keeping all the GCN architecture so that graphics optimizations coded into the existing games will still work correctly on the new consoles without the need for any work.
Uh... I don't think that follows. Yes, it will be a bonus if older games work well on newer consoles without too much effort; but with the length of a console refresh cycle, one would expect a raw performance improvement sufficient to overcome most issues. But it's not as if when GCN took over from VLIW, older games stopped working; architecture change is hidden behind the APIs.
Fallen Kell " The reason is that AMD is still competing for the console and part of that is maintaining backwards compatibility for the next generation of consoles with the current gen. " prove it... show us some links that state this
Aahahah.. prove it. 9 years of discounted sell are not enough to show you that GCN simply started as half a generation old architecture to end as a generation obsolete one? Yes, you may recall the only peak glory in GCN life, that is Hawaii, which was so discounted that it made nvidia drop the price of their 780Ti. After that move AMD just brought out one fail after the other, starting with Fiji and it's monster BOM cost just to reach the much cheaper GM200 based cards, and following with Polaris (yes, the once labeled "next AMD generation is going to dominate") and then again with Vega, and Vega VII is no different.
What have you not understood about the fact that AMD has to use whatever technology is available to get performance near that of a 2080? What do you think the improvements will be once nvidia moves to 7nm (or 7nm+)?
Today AMD is incapable of reaching nvidia's performance, and they also lack its modern features. GCN at 7nm can be as fast as a 1080Ti that is 3 years older, and the AMD card still uses more power, which shows how inefficient the GCN architecture is.
That's why I hope Navi is a real improvement, or we will be left with an nvidia monopoly, as at 7nm it will really have more than a generation of advantage, seeing as it will be much more efficient while still having new features that AMD will only add 3 or 4 years from now.
links to what you stated ??? sounds a little like just your opinion, with no links....
considering that AMD doesn't have the deep pockets that Nvidia has, or the part that AMD has to find the funds to R&D BOTH cpus AND gpus, while nvidia can put all the funds they can into R&D, it seems to me that AMD has done ok for what they have had to work with over the years, with Zen doing well in the CPU space, they might start to have a little more funds to put back into their products, and while i haven't read much about Navi, i am also hopeful that it may give nvidia some competition, as we sure do need it...
It seems you lack the basic intelligence to understand the facts that can be seen by anyone else. You just have hopes that are based on "your opinion", not facts. "they might start to have a little more funds to put back into their products". Well, last quarter, with Zen selling like never before, they managed a net income of 28 million. Yes, you read right. 28. Nvidia, with all its problems, got more than 500. Yes, you read right. About 20 times more. The facts are 2 (these are numbers based on facts, not opinion, and you can create interpretations from facts, not from your hopes for the future): - AMD is selling Ryzen CPU at a discount like GPUs and both have a 0.2% net margin - The margins they have on CPUs can't compensate for the losses they have in the GPU market, and seeing that they manage to make some money with consoles only when there are spikes in demand, I am asking you when and from what AMD will get new funds to pay for anything. You see, it's not really "a hope" to believe that AMD is losing money on every Vega they sell, given the cost of the BOM with respect to the costs of the competition. Denying this is a "hope it is not real", not a sensible way to ask for "facts".
You have to know at least basic facts before coming here to ask for links on basic things that everyone that knows this market already knows. If you want to start looking less like an idiot than you do by constantly asking for "proofs and links", just start reading the quarterly results and see the effects of the long-term strategies both companies have pursued.
Then, if you have a minimum of technical competence (which I doubt), look at what AMD does with its mm^2 and Watts and what nvidia does. Then come again to ask for "links" where I can tell you that AMD's architecture is one generation behind (and will probably be left further behind once nvidia moves to 7nm... unless Navi is not GCN).
right now.. your facts.. are also just your opinion, i would like to see where you get your facts, so i can see the same, thats why i asked for links... again.. AMD is fighting 2 fronts, CPUs AND gpus.. Nvidia.. GPUs ONLY, im not refuting your " facts " about how much each is earning...its obvious... but.. common sense says.. if one company has " 20 times " more income than the other.. then they are able to put more back into the business than the other... that is why, for the NHL at least, they have salary caps, to level the playing field so some teams cant " buy " their way to winning... thats just common sense... " AMD is selling Ryzen CPU at a discount like GPUs and both have a 0.2% net margin " and where did you read this.. again.. post a link so we can see the same info... where did you read how much it costs AMD to make a Vega, or that they're losing money on each GPU ?? again.. post links so we can see the SAME info you are... " You have to know at least basic facts before coming here to ask for links on basic things that everyone that knows this market already knows. " and some would like to see where you get these " facts " that you keep talking about... thats basic knowledge.
" if you have a minimum of technical competence " actually.. i have quite a bit of technical competence, i build my own comps, even work on my own car when i am able to. " look at what AMD does with its mm^2 and Watts and what nvidia does. " that just shows nvidia's architecture is just more efficient, and needs less power for that it does...
lastly.. i have been civil and polite to you in my posts.. resorting to name calling and insults, does not prove your points, or make your supposed " facts " and more real. quite frankly.. resorting to name calling and insults, just shows how immature and childish you are.. that is a fact
where did you read how much it costs AMD to make a Vega, or that they're losing money on each GPU ??
Did you read (and understand) AMD's latest quarterly results? Have you seen the total cost of production and the relative net income? Have you any idea of how the margin is calculated (yes, it takes into account the production costs reported in the quarterly results)? Have you understood half of what I have written, when based on facts that AMD just made $28 of NET income last quarter there are 2 possible ways of seeing the cause of those pitiful numbers? One is that AMD is discounting every product (GPU and CPU) to a ridiculous margin, the other that Zen is sold at a profit while GPUs are not? Or you may try to hypnotize the third view, that they are selling Zen at a loss and GPUs with a big margin. Anything is good, but at least one of these is true. Take the one you prefer and then try to think which one requires fewer artificial hypotheses to be true (someone once said that the best solution is most often the simplest one).
that just shows nvidia's architecture is just more efficient, and needs less power for that it does
That demonstrates that nvidia's architecture is simply one generation ahead, as what changes from one generation to the next is usually the performance you can get from a certain die area which sucks a certain amount of energy, and that is the reason why an x80-level card today costs >$1000. If you can get the same 1080Ti performance (and here I'm not considering the new features nvidia has added to Turing) only by using a completely new process node, more current and HBM, just 3 years later, then you may arrive at the conclusion that something is not working correctly in what you have created.
So my statement that GCN was dead at launch (when a 7970 was on par with a GTX680 that was smaller and used much less energy) finds its perfect demonstration in Vega 20, where GCN is simply 3 years behind, with a BOM costing at least twice that of the 1080Ti (and still using more energy).
Now, if you can't understand the minimum basic facts, and your hope is that their interpretation using a completely genuine and coherent thesis is wrong and requires "facts and links", then it is not my problem. Continue to hope that what I wrote is totally rubbish and that you are the one with the right answer to this particular situation. However, what I said is completely coherent with the facts that we have been witnessing from GCN's launch up to now.
" Have you seen the total cost of production and the relative net income? " no.. thats is why i asked you to post where you got this from, so i can see the same info, what part do YOU not understand ? " based on facts that AMD just made $28 of NET income last quarter" because you continue to refuse to even mention where you get these supposed facts from, i think your facts, are false. " One is that AMD is discounting every product (GPU and CPU) to a ridiculous margin" and WHERE have you read this??? has any one else read this, and can verify it?? i sure dont remember reading anything about this, any where
what does being hypnotized have to do with anything ?? do you even know what hypnotize means ? just in case, this is what it means : to put in the hypnotic state. to influence, control, or direct completely, as by personal charm, words, or domination: The speaker hypnotized the audience with his powerful personality.
again.. resorting to being insulting means you are just immature and childish....
look.. you either post links, or mention where you are getting your facts and info from, so i can also see the SAME facts and info without having to spend hours looking.. so i can make the same conclusions, or, admit, you CANT post where you get your facts and info from, because they are just your opinion, and nothing else.. but i guess asking a child to mention where they get their facts and info from, so one can then see the same facts and info, is just too much to ask...
Kid, as I said, you lack the basic intelligence to recognize when you are just arguing about nothing. The numbers I'm using are those published by AMD and nvidia in their quarterly results. Now, if you are asking me for the links to those reports, it means you don't have the minimum idea of what I'm talking about AND you cannot do a simple search with Google. So I stand by my "insults": you do not have the intelligence to argue about this simple topic, so just stop writing on this site, which has much better readers than you and is not gaining anything from your presence.
ahh here are the insults and name calling... and you are calling me a kid ??
i can, and have done, a simple google search.. BUT, i would like to see the SAME info YOU are looking at as well, but again.. i guess that is just too much to ask of you, is it wrong to want to be able to compare the same facts as you are looking at ? i guess so.. cause you STILL refuse to post where you get your facts and info from, i sure dont have the time to spend who knows how long doing a simple google search...
by standing by your insults, you just show YOU are the child here.. NOT me, as only CHILDREN resort to insults and name calling...
as i said in my reply farther down: you refuse to post links, OR mention your sources, simply because YOU DONT HAVE ANY.. most of what you post.. is probably made up, or rumor, if AT posted things like you do, with no sources, you probably would be all over them asking for links, proof and the like... and by YOUR previous posts, all of your info is made up and false..
maybe YOU should stop posting your info and facts from rumor sites, and learn to talk with some intelligence yourself.
sorry CiccioB, but i agree with Korguz.. i have tried to find the " facts " on some of the things you have posted in this thread.. and i cant find them.. i also would like to know where you get your " opinions " from.
Quarterly results! They're not really difficult to find.. Try "AMD quarterly results" and then "nvidia quarterly results" in the Google search engine and.. voilà, les jeux sont faits. Two clicks and you can read them. Go back some years if you want, so you can have a history of what has happened during the last years, apart from the useless comments by fanboys you find on forums. Now, if you have further problems understanding all those tables and numbers, or you do not know what a gross margin is vs a net income, then you can't come here and argue I have no facts. It's you who can't understand publicly available data.
I wonder what you were searching for, to not find those numbers that have been commented on by every site that covers technology.
@Korguz You are definitely a kid. You surely do not scare me with all that nonsense you write, when the solution to YOUR problem (not mine) was simply to read more and write less.
CiccioB you are hilarious !!! did you look up the word hypnotize, to see what it means, and how it even relates to this ? as, and i quote YOU " Or you may try to hypnotize the third view " what does that even mean ??
i KNEW the ONLY link you would mention.. is the EASY one to find.
BUT... what about all of these LIES :
" GCN was dead at its launch time" " 9 years of discounted sell are not enough to show you that GCN simply started as half a generation old architecture to end as a generation obsolete one? " " that is Hawaii, which was so discounted " " starting with Fiji and it's monster BOM cost " " AMD is selling Ryzen CPU at a discount like GPUs and both have a 0.2% net margin " " One is that AMD is discounting every product (GPU and CPU) to a ridiculous margin " " a "panic plan" that required about 3 years to create the chips. 3 years ago they already know that they would have panicked at the RTX cards launch and so they made the RT-less chip as well "
i did the simple google search for the above comments from you, as well as variations.. and guess what.. NOTHING comes up. THESE are the links i would like you to provide, as i cant find any of these LIES . so " It's you that can't understand publicly available data. " the above quotes, are not publicly available data, even your sacred " simple google search " cant find them.
lastly.. your insults and name calling ( and the fact that you stand by them ).. the only people i hear things like this from.. are my friends' and coworkers' TEENAGE CHILDREN. NOT adults.. adults don't resort to things like this, at least the ones that i know... this alone.. shows how immature and childish you really are... i am pretty sure.. you WILL never post links to 98% of the LIES, RUMORS, or your personal speculation, and opinions, because of the simple fact, you just CAN'T, as your sources for all this... simply don't exist.
when you are able to reply to people with out having to resort to name calling and insults, then maybe you might be taken seriously. till then... you are nothing but a lying, immature child, who needs to grow up, and learn how to talk to other people in a respectful manner... maybe YOU should take your OWN advice, and simply read more and write less. Have a good day.
In what way does 12nm FFN improve over 16nm? The transistor density is roughly the same, the frequencies see little to no improvement and the power-efficiency has only seen small improvements. Worse yet, the space used per SM has gotten worse. I do know that Turing brings architectural improvements, but are they at the cost of die space? Seems odd that Nvidia wouldn't care about die area when their flagships are huge chips that would benefit from a more dense architecture.
Or could it be that Turing adds some kind of (sparse) logic that they haven't mentioned?
Not only that. With Turing you also get mesh shading and better support for thread switching, which is an awful technique used on GCN to improve its terrible efficiency, since it has lots of "bubbles" in the pipelines. That's the reason you see previously AMD-optimized games that didn't run too well on Pascal work much better on Turing, as the highly threaded technique (the famous AC, which is a bit overused in engines created for the console HW) is not going to constantly stall the SM with useless work like that of frequent task switching.
“Worse yet, the space used per SM has gotten worse.” Not true.. you know, Turing has separate cuda cores for INT and FP. It means that when Turing has 1536 cuda cores, that means 1536 INT + 1536 FP cores. So in terms of die size Turing actually has 2x the cuda cores compared to Pascal.
Not exactly; the number of CUDA cores is the same, it's just that a new independent ALU has been added. A CUDA core is not only an execution unit; it also has registers, memory (cache), buses (memory access) and other special execution units (load/store). By adding a new integer ALU you don't automatically get double the capacity, as you would by really doubling the number of complete CUDA cores.
"This has proven to be one of NVIDIA's bigger advantages over AMD, an continues to allow them to get away with less memory bandwidth than we'd otherwise expect some of their GPUs to need." Missing d as in "and": This has proven to be one of NVIDIA's bigger advantages over AMD, and continues to allow them to get away with less memory bandwidth than we'd otherwise expect some of their GPUs to need.

"so we've only seen a handful of games implement (such as Wolfenstein II) implement it thus far." Double "implement", one before the ()s and one after: so we've only seen a handful of games (such as Wolfenstein II) implement it thus far.
For our games, these results is actually the closest the RX 590 can get to the GTX 1660 Ti, Use "are" not "is": For our games, these results are actually the closest the RX 590 can get to the GTX 1660 Ti,
"This test offers a slew of additional tests - many of which use behind the scenes or in our earlier architectural analysis - but for now we'll stick to simple pixel and texel fillrates." Missing "we" (I suspect the sentence should be reconstructed without the dashes, but I'm not that good): This test offers a slew of additional tests - many of which we use behind the scenes or in our earlier architectural analysis - but for now we'll stick to simple pixel and texel fillrates.
"Looking at temperatures, there are no big surprises here. EVGA seems to have tuned their card for cooling, and as a result the large, 2.75-slot card reports some of the lowest numbers in our charts, including a 67C under FurMark when the card is capped at the reference spec GTX 1660 Ti's 120W limit." I think this could be clarified as their are 2 EVGA cards in the charts and the one at 67C is not explicitly labeled as EVGA.
I don't think they are confusing, 16 is between 10 and 20, plus the RTX is extra differentiation. In fact if NVIDIA had some cards in the 20 series with RTX capability and some cards in 20 series without RTX capability, even if some were 'GTX' and some were 'RTX', then that would be far more confusing. Putting the non-RTX Turing cards in their own series is a way of avoiding confusion. But if they actually come out with an "1180" as say some rumors floating around, that would be very confusing.
It will be interesting to see next year: RTX 3050 and GTX 2650 Ti for the weaker versions, if we get a new RTX card family... Hmm... that could work if they keep the naming. 2021: RTX 3040 and GTX 2640 Ti...
Why disable all AMD- or NVIDIA-specific settings? Anyone using those cards would have those settings on... shouldn't the numbers reflect exactly what the cards are capable of when utilizing all the settings available? You wouldn't do a Turing Major review without giving some numbers for RTX ON in any benchmarks that supported it...
Yes, the tests could be done with specific GPU features turned on, but then you have to clearly state what the advantage of each particular addition is for the final image quality, because you can have (optional) effects that cut the frame rate but increase quality a lot. Looking only at the mere final number, you may conclude that a GPU is better than another because it is faster (or just costs less), but in reality you are comparing two different kinds of quality results. It's no different from testing two cards with different detail settings (without stating which they are) and then trying to decide which is the better one based only on the frame rate results (which is the kind of result that everyone looks at).
Load power consumption is wrong. If you want to see only the GPU measured, measure only the GPU like TechPowerUp does. It's clear that if you measure total load, it doesn't show it right.
134W 1660 Ti vs 292W Vega 56 (source: TechPowerUp)
It's clear that the GTX 1660 Ti is a much, much better GPU for FHD, and for QHD also.
Well, that however does not tell the entire story. The ratio versus the total consumption of the system is also important. Let's say that for a gaming PC you already have to use 1000W. A card that sucks 100W more just wastes 10% more of your power. Meanwhile, if your PC is using 100W, such a card will be doubling the consumption. As you see, the card is always using 100W more, but the impact is different.
Let's take a different example: your PC uses about 150W in everyday use. You have to buy an SSD. There are some SSDs that consume twice the power of others for the same performance. You may say that the difference is huge. Well, an SSD consumes between 2 and 5W. Buying the less efficient one (5W) is not really going to have an impact on the total consumption of your PC.
Price is still crap for the performance. We live in an age now where hardware and software are no longer growing, and a GPU from 2012 can still run all modern games today. The market is not going to be huge for overpriced GPUs that are really not that much of an improvement over 2012.
Given that 21:9 monitors are also making great inroads into the gamer's purchase lists, can benchmark resolutions also include 2560.1080p, 3440.1440p and (my wishlist) 3840.1600p benchies??
That's how you write it, and the "p" should not be used when stating the full resolution, since it's only supposed to be used for denoting video format resolution.
P.S. using 1080p, etc. for display resolutions isn't technically correct either, but it's too late for that.
Why can't they just make a GTX 2080 Ti with the same performance as the RTX 2080 Ti but without useless RT and DLSS, and charge something like $899 (still 100 bucks more than the GTX 1080 Ti)? I bet it would sell like hotcakes, or at least better than their overpriced RTX 2080 Ti.
Why do I feel like this was a panic plan in an attempt to bandage the bleed from RTX failure? No support at launch and months later still abysmal support on a non-game changing and insanely expensive technology.
Yes, a "panic plan" that required about 3 years to create the chips. 3 years ago they already know that they would have panicked at the RTX cards launch and so they made the RT-less chip as well. They didn't know that the RT could not be supported in performance with the low number of CUDA core low level cards have. They didn't know that the competition would play with the only weapon left to it, that is price, as they could not imagine that the competition was not ready with a beefed-up architecture capable of the same functionalities. So, yes, they panicked for sure. They were not prepared for anything of what is happening.
" that required about 3 years to create the chips. 3 years ago they already know that they would have panicked at the RTX cards launch and so they made the RT-less chip as well. They didn't know that the RT could not be supported in performance with the low number of CUDA core low level cards have. "
and where did you read this ? you do understand, and realize... it IS possible to either disable, or remove, parts of an IC without having to spend " about 3 years " to create the product, right ? intel does it with their IGP in their cpus, amd did it back in the Phenom days with chips like the Phenom X4 and X3....
So they created TU116, a completely new die without RT and Tensor Cores, to reduce the size of the die and lose about 15% of performance with respect to the 2060, all in 3 months because they panicked? You probably have no idea of the effort it takes to create a new 280mm^2 die. Well, from this and your previous posts, you have no idea of what you are talking about at all.
and again.. WHERE do you get your info from ?? they can remove parts of IC's, or disable them, have you not been reading the articles on here about the disabled IGPs in intels cpus, but still charging the SAME price as the fully enabled ones ?
you refuse to post links, OR mention your sources, simply because YOU DONT HAVE ANY.. IMO.. most of what you post.. is probably made up, or rumor, if AT posted things like you do, with no sources, you probably would be all over them asking for links, proof and the like... and by YOUR previous posts, all of your info is made up and false..
there is no point talking to a CHILD any more... when are you going to resort to name calling and insults again ?
Last page: I don't think comparing the 1660 Ti to the 1060 6GB is appropriate; it should be either the 3GB or the 1050 Ti. Comparing it to the 1060 6GB makes it look like Nvidia isn't raising prices as much as they really are.
I'm basically out of the GPU market unless and until pricing changes. Not that any good games have come out in the last few years, or are scheduled to. But I should be able to run 3 monitors at 1080p with 60fps minimum in any modern game for $200. Based on the numbers here, I don't think this $300 1660Ti could even do that, and we're already over the threshold by $100.
You are right about not caring about RTX. Basically the timing was just really bad for it, global economy is in contraction. Moore's law is dead, I guess that's why they're trying some other form of value add, but charging consumers isn't the way to do it. Labor participation rate is barely above 60%, over 1/3rd of the country is unemployed. Wages have stagnated for 70 years! We don't have any more to give!
Does anyone think EVGA could add just a bit more depth to that card? What is it? A 3 slot? At least 2.5. It's either a portable furnace or idiotic overkill.
if somebody hasn't upgraded in a while (9 series or older, or 300 series or older for AMD) then one of these cards is ok. not great but adequate. if you can last with what you have, or if you would be happy with used or a refurb, then go that route or wait for real next-gen (Zotac has refurbed 1070 Tis for $269 and they overclock to 1080-level performance).
nvidia wanted to put these on 7nm or at least 10nm. 10nm isn't worth it in terms of performance and density (it's more of a cell phone node) and 7nm needs EUV to make large dies. it's the waiting game. once EUV comes (if it does) then we will see a spurt of card gens coming quicker like they used to, and then another slowdown after about 5nm maybe.
C'DaleRider - Friday, February 22, 2019 - link
Good read. Thx.
Opencg - Saturday, February 23, 2019 - link
gtx at rtx prices. not really a fan of that graph at the end. I mean 1080 ti were about 500 about half a year ago. the perf/dollar is surely less than -7% more like -30%. as well due to the 36% perf gain quoted being inflated as hell. double the price and +20% perf is not -7% anand
eddman - Saturday, February 23, 2019 - link
They are comparing them based on their launch MSRP, which is fair.
Actually, it seems they used the cut price of $500 for 1080 instead of the $600 launch MSRP. The perf/$ increases by ~15% if we use the latter, although it's still a pathetic generational improvement, considering 1080's perf/$ was ~55% better than 980.
close - Saturday, February 23, 2019 - link
In all fairness when comparing products from 2 different generations that are both still on the market you should compare on both launch price and current price. The purpose is to know which is the better choice these days. To know the historical launch prices and trends between generation is good for conformity but very few readers care about it for more than curiosity and theoretical comparisons.
jjj - Friday, February 22, 2019 - link
The 1060 has been in retail for 2.5 years so the perf gains offered here a lot less than what both Nvidia and AMD need to offer. They are pushing prices up and up but that's not a long term strategy.
Then again, Nvidia doesn't care much about this market, they are shifting to server, auto and cloud gaming. In 5 years from now, they can afford to sell nothing in PC, unlike both AMD and Intel.
jjj - Friday, February 22, 2019 - link
A small correction here, there is no perf gain here at all, in terms of perf per dollar.
D. Lister - Friday, February 22, 2019 - link
Did you actually read the article before commenting on it? It is right there, on the last page - 21% increase in performance/dollar, which added with the very decent gain in performance/watt would suggest the company is anything but just sitting on their laurels. Unlike another company, which has been brute-forcing an architecture that is more than a decade old, and squandering their intellectual resources to design budget chips for consoles. :P
shabby - Friday, February 22, 2019 - link
We didn't wait 2.5 years for such a meager performance increase. Architecture performance increases were much higher before Turing, Nvidia is milking us, can't you see?
Smell This - Friday, February 22, 2019 - link
DING ! I know it's my own bias, but branding looks like a typical, on-going 'bait-and-switch' scam whereby nVidia moves their goal posts by whim -- and adds yet another $100 in retail price (for the last 2 generations?). For those fans who spent beeg-buckeroos on a GTX 1070 (or even a 1060 6GB), it's The Way You Meant to Be 'Ewed-Scrayed.
haukionkannel - Saturday, February 23, 2019 - link
Do you remember how much cpus used to improve From generation to generation... 3-5%... That was when there was no competition. Now when there is competition we see 15% increase between generations or less. Well come to the future of GPUs. 3-5 % of increase between generations if there is not competition. Maybe 15 or less if there is competition. The good point is that you can keep the same gpu 6 year and you have no need to upgrade and lose money.
peevee - Monday, February 25, 2019 - link
What are you talking about? 3-5%? Look at 960 vs 1060, right there in this article. About 100%!
Smell This - Monday, February 25, 2019 - link
Uhhh, Peewee?
Get back to me when you compare the GTX 960 2GB specs to the 6GB GTX 1660 Ti specs (and the GTX 1060 3/6GB specs, as well).
I know it's hard for you. It's tough to hit all those moving targets (and goal posts) ...
eva02langley - Friday, February 22, 2019 - link
Not impressive at all when the Vega 56 sold for the same price with 3 AAA games and offering 15%+ performances.
cfenton - Friday, February 22, 2019 - link
"Instead, AMD’s competitor for the GTX 1660 Ti looks like it will be the Radeon RX Vega 56. The company sent word last night that they are continuing to work with partners to offer lower promotional prices on the card, including a single model that was available for $279, but as of press time has since sold out. Notably, AMD is asserting that this is not a price drop, so there’s an unusual bit of fence sitting here; the company may be waiting to see what actual, retail GTX 1660 Ti card prices end up like. So I’m not wholly convinced we’re going to see too many $279 Vega 56 cards, but we’ll see."
If Vega 56 becomes available for $279 regularly, then it will be a better deal. Right now, that price is only being offered on one model that you can't buy. The cheapest Vega 56 model on Newegg is $399 right now.
eva02langley - Friday, February 22, 2019 - link
You can get it for the same price, move on, it is the better deal. Also, expect higher price than MSRP from OEM creating special models with different coolers.
And basically... RX 3080 is supposed to be between 250-300$ with Vega 64 + 15% performances. The interest of this card is going to be short lived.
MadManMark - Friday, February 22, 2019 - link
Yes, we know -- AMD's next card is always going to be the one to buy.
But isn't it odd that by the time it stops being the "next" card and becomes the current card, suddenly it isn't that appealing anymore, and folks like you immediately move onto the next "next" card? Vega was the "next" card a year ago ...
cfenton - Saturday, February 23, 2019 - link
Where? I can't find one at that price anywhere, while there are several 1660TIs in stock at $279.
Retycint - Tuesday, February 26, 2019 - link
Not anymore, no. As far as I've checked, all the $279 Vega 56s have sold out at the moment, and with AMD stating that it's a temporary price cut, I'm not expecting any more $279 Vegas to come our way.
Cellar Door - Friday, February 22, 2019 - link
You should try reading the actual article sometimes - once again 'jjj' with a pointless comment of the day.
Ushio01 - Saturday, February 23, 2019 - link
It costs what a GTX1060 did at launch and offers GTX1070 performance which still costs more.
Midwayman - Friday, February 22, 2019 - link
I feel like they don't realize that until they improve the performance per $$$ there is very little reason to upgrade. I'm happy sitting on an older card until that changes. Though If I were on a lower end card I might be kicking myself for not just buying a better card years ago.
eva02langley - Friday, February 22, 2019 - link
Since the bracket price moved up so much for relative performance at higher price point from the last generation, there is absolutely no reason for upgrading. That is different if you need a GPU.
zmatt - Friday, February 22, 2019 - link
Agreed. It's kind of wild that I have to pay $350 to get on average 10fps better than my 980ti. If I want a real solid performance improvement I have to essentially pay the same today as when the 980ti was brand new. The 2070 is anywhere between $500-$600 right now depending on model and features. IIRC the 980ti was around $650. And according to Anandtech's own benchmarks it gives on average 20fps better performance. That's 2 generations and 5 years, and I get 20fps for $50 less? No. I should have a 100% performance advantage for the same price by this point. Nvidia is milking us. I'm eyeballing it a bit here but the 2080Ti is a little bit over double the performance of a 980Ti. It should cost less than $700 to be a good deal.
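To put numbers on that kind of complaint, perf per dollar is just frames per second divided by price; here is a minimal sketch (the prices and frame rates below are placeholders, not figures from the review):

    def perf_per_dollar_change(old_price, old_fps, new_price, new_fps):
        """Percentage change in frames per dollar between two cards."""
        old_ratio = old_fps / old_price
        new_ratio = new_fps / new_price
        return (new_ratio / old_ratio - 1) * 100

    # Hypothetical: a $650 card at 60 fps replaced by a $550 card at 80 fps.
    print(f"{perf_per_dollar_change(650, 60, 550, 80):+.1f}%")   # about +57.6%

    # A card 20% faster but 50% pricier is a perf-per-dollar regression.
    print(f"{perf_per_dollar_change(650, 60, 975, 72):+.1f}%")   # -20.0%

The point being that a healthy generational jump should push that number well into positive territory, not leave it hovering near zero.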
Samus - Friday, February 22, 2019 - link
I agree in that this card is a tough sell over a RTX2060. Most consumers are going to spend the extra $60-$70 for what is a faster, more well-rounded and future-proof card. If this were $100 cheaper it'd make some sense, but it isn't.
PeachNCream - Friday, February 22, 2019 - link
I'm not so sure about the value prospects of the 2070. The banner feature, real-time ray tracing, is quite slow even on the most powerful Turing cards and doesn't offer much of a graphical improvement for the variety of costs involved (power and price mainly). That positions the 1660 as a potentially good selling graphics card AND endangers the adoption of said ray tracing such that it becomes a less appealing feature for game developers to implement. Why spend the cash on supporting a feature that reduces performance and isn't supported on the widest possible variety of potential game buyers' computers and why support it now when NVIDIA seems to have flinched and released the 1660 in a show of a lack of commitment? Already game studios have ditched SLI now that DX12 pushed support off GPU companies and into price-sensitive game publisher studios. We aren't even seeing the hyped up feature of SLI between a dGPU and iGPU that would have been an easy win on the average gaming laptop due in large part to cost sensitivity and risk aversion at the game studios (along with a healthy dose of "console first, PC second" prioritization FFS).
GreenReaper - Friday, February 22, 2019 - link
What I think you're missing is that the DirectX rendering API set by Microsoft will be implemented by all parties sooner or later. It really *does* meet a need which has been approximated in any number of ways previously. Next generation consoles are likely to have it as a feature, and if so all the AAA games for which it is relevant are likely to use it.
Having said that, the benefit for this generation is . . . dubious. The first generation always sells at a premium, and having an exclusive even moreso; so unless you need the expanded RAM or other features that the higher-spec cards also provide, it's hard to justify paying it.
alfatekpt - Monday, February 25, 2019 - link
I'm not sure about that. It is also an increase in thermals and power consumption that also costs money over time. RTX advantage is basically null at that point unless you want to play at low FPS, so the 2060 advantage is 'merely' raw performance.
For most people and current games the 1660 already offers ultra great performance, so I'm not sure people are gonna shell out even more money for the 2060 since the 1660 is already a tad expensive.
The 1660 seems to be an awesome combination of performance and efficiency. Would it be better $50 lower? Of course, but why, since they don't have real competition from AMD...
Strunf - Friday, February 22, 2019 - link
Why would nvidia give up on a market that costs them almost nothing? If 5 years from now they do cloud gaming then they pretty much are still doing GPUs.
Anyways even in 5 years cloud gaming will still be a minor part of the GPU market.
MadManMark - Friday, February 22, 2019 - link
"They are pushing prices up and up but that's not a long term strategy."That comment completely ignores the massive increase in value over both the RX 590 and Vega 56. Nividia produces a card that both makes the RX590 at the same pricepoint completely unjustifiable, and prompts AMD to cut the price of the Vega 56 in HALF overnight, and you are saying that it is *Nvidia* not *AMD* that is charging high prices?!?! I've always thought the AMD GPU fanatics who think AMD delivers more value were somewhat delusional, but this comment really takes the cake.
eddman - Saturday, February 23, 2019 - link
It's not about AMD. The launch prices have clearly been increased compared to previous gen nvidia cards.
Even this card is $30 more than the general $200-250 range.
Retycint - Tuesday, February 26, 2019 - link
AMD selling overpriced cards does not subtract from the point that Nvidia is also attempting to raise the price as well. Both companies have put out underwhelming products this gen.
Rocket321 - Friday, February 22, 2019 - link
"finally puts a Turing card in competition with their Pascal cards" should say Polaris.
Ryan Smith - Friday, February 22, 2019 - link
Boy I can't wait for Navi, since it sounds nothing like Turing... Thanks!
Kogan - Friday, February 22, 2019 - link
Aww, I was hoping this release would lower the price on those used 1070's. Oh well. I'll still probably go for a used 1070 over this one. Nearly identical in every way and can be found for as low as $200.
Hamm Burger - Friday, February 22, 2019 - link
Reading "Turing Sheds" in the headline makes me wonder what he could have done with a couple of these at Bletchley Park (which, for anybody passing, is well worth the steep entry fee — see bletchleypark.org.uk).
Sorry for the interruption. I'll return you to the normal service.
Colin1497 - Friday, February 22, 2019 - link
"Now the bigger question in my mind: why is it so important to NVIDIA to be able to dual-issue FP32 and FP16 operations, such that they’re willing to dedicate die space to fixed FP16 cores? Are they expecting these operations to be frequently used together within a thread? Or is it just a matter of execution ports and routing?"It seems pretty likely that they added the FP16 cores because it simplified design, drivers, etc. It was easier to just drop in a few (as you mentioned) tiny FP16 cores than it was to change behavior of the architecture.
CiccioB - Friday, February 22, 2019 - link
FP16 is a way to simplify shading computation compared to the commonly used FP32.
It allows for higher bandwidth (x2) and higher speed (x2, so half the energy for the same work) in the same HW area. It was a feature used in HPC where bandwidth, power consumption and of course computation time are quite critical. It then ended up in gaming-class architectures just because they have found a way to exploit it there too.
Some games have started using FP16 for their shading. On AMD's side of the fence, only Vega class cards support packed FP16 math.
The use of an INT ALU that executes integer instructions together with the FP ones is instead an exclusive feature that can really improve shading performance, much more than a complex feature like heavy threading (with constant interruptions), which is needed on architectures that cannot keep the ALUs fed.
In fact we see that with fewer CUDA cores Turing can do the same work as Pascal while using less energy. And no magic ACE is present.
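As a rough illustration of why a separate INT pipe can matter, here is a minimal sketch of issue throughput with and without a concurrent integer pipe (the 36:100 INT-to-FP instruction mix is an often-quoted average for shader code, but treat the exact ratio as an assumption):

    def shader_cycles(fp_ops, int_ops, concurrent_int_pipe):
        """Toy estimate of issue cycles for a stream of FP and INT instructions."""
        if concurrent_int_pipe:
            # Integer work hides behind the floating-point work.
            return max(fp_ops, int_ops)
        # A single shared pipe spends a slot on every instruction.
        return fp_ops + int_ops

    fp, integer = 100, 36  # assumed instruction mix
    shared = shader_cycles(fp, integer, concurrent_int_pipe=False)  # 136 cycles
    split = shader_cycles(fp, integer, concurrent_int_pipe=True)    # 100 cycles
    print(f"speedup from a concurrent INT pipe: {shared / split:.2f}x")  # ~1.36x

It is only a first-order model, but it shows why the same shader can finish sooner without needing more CUDA cores.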
Yojimbo - Friday, February 22, 2019 - link
They didn't just drop in a few. It seems they have enough for 2x FP32 performance. Why are they dual issue? My guess is it is because that is what's necessary for Tensor Core operation. I think NVIDIA is being a bit secretive about the Tensor Cores. It's clear they took the RT Core circuitry out of the Turing minor die. As far as the Tensor Cores, I'm not so sure. Think about it this way: suppose Tensor Cores really are specialized separate cores. Then they also happen to have the capability of non tensor FP16 operation in dual issue with FP32 CUDA cores? Because if they don't then whatever functionality NVIDIA has planned for the FP16 cores on Turing minor would be incompatible with Turing major and Volta. I don't see how that can be the case, however, because, according to this review, Turing major is listed as the same CUDA compute generation as Turing minor. Now if the Tensor Cores can double as general purpose FP16 CUDA cores, then what's to say that FP16 and FP32 CUDA cores can't double as Tensor Cores? That is, if the Tensor Core can be made with two data flow paths, one following general purpose FP16 operations and one following Tensor Core instruction operations, then commutatively a general purpose CUDA core can be made with two data flow paths, one following general purpose operations and one following Tensor Core instruction operations.
When Turing came out with Tensor Core operations but with FP64 cores cut from the die and no increase in FP32 CUDA cores per SM over Volta I was surprised. But with this new information from the Turing Minor launch it makes more sense to me. I don't know if they have the dedicated FP16 cores on Volta. If they do then the FP64 cores don't need to play the following role, but if they are able to use the FP64 cores as FP16 cores then hypothetically they have enough cores to account for the 64 FMA operations per clock per SM of the 8 Tensor Cores per SM. But on Turing major they just didn't have the cores to account for the Tensor Core performance. These FP16 cores on Turing minor seem to be exactly what would be necessary to make up for the shortfall. So, my guess is that Turing major also has these same cores. The difference is either entirely one of firmware/drivers that allows the Tensor Core data path to be operated on Turing major but not Turing minor or Turing major has some extra circuitry that allows the CUDA cores to be lashed together with an alternate data flow path that doesn't exist in Turing minor.
GreenReaper - Friday, February 22, 2019 - link
Agreed. It seems likely that most of the hardware is present, just not active.
Frankly, it's not clear why these couldn't be binned versions of the higher-level chips that haven't met the QA requirements, which would be one reason it took this long to release - you need enough stock to be able to distribute it. If it's planned out in advance, they just need X good CUDA cores and Y ROPs that run at Z Mhz, combined with at least [n] MB of cache. Fuse off the bad or unwanted portions to save on power and you're good.
Of course it *could* be like Intel, which truly make smaller derivatives. If so that suggests they'll be selling a lot of these cards. Even then, though, Yojimbo's supposition about the core design being essentially the same is likely to be true.
Yojimbo - Saturday, February 23, 2019 - link
Yeah the die size and transistor count is still large for the number of CUDA cores, being that this review claims the 1660Ti has all SMs on the TU116 enabled. I said it was clear they took RT circuitry out. But I was wrong, that's not clear. It seems the die area per CUDA core and transistors per CUDA core of the TU116 are extremely close to the TU106, which is fully-enabled in RTX 2070. If this is the result of the INT32 and FP16 cores of the TU116 then where exactly do any cost savings of removing the Tensor Cores and RT Cores come from? Definitely the cost of completely re-architecting another GPU would outweigh the slight reduction in die size they seem to have achieved.
On the other hand, I'd imagine TU116 will be such a high volume part that unless yields are really lousy, binning alone won't provide enough chips (and where are the fully enabled versions of the 284 mm^2 RTX dies going, anywhere? No such product has thus far been announced.) Perhaps such a small number of RT cores was judged to be insufficient for RTX gaming. Even if not impossible to create some useful effects including that many RT cores, if developers were incentivized to target such few RT cores with their RTX efforts because the volume of such RT-enabled cards was significant then they may reduce the scope and scale of RTX enhancements they undertake, putting a drag on the adoption of the technology. So NVIDIA opted to disable the RT cores, and perhaps the Tensor Cores, present on the dies even when they are actually fully functioning. Perhaps it was simply cheaper to eat the wasted die space per chip than to design an entirely new GPU with the RT cores and Tensor Cores removed.
Yojimbo - Saturday, February 23, 2019 - link
My guess is that in the next (7 nm) generation, NVIDIA will create the RTX 3050 to have a very similar number of "RTX-ops" (and, more importantly, actual RTX performance) as the RTX 2060, thereby setting the capabilities of the RTX 2060 as the minimum targetable hardware for developers to apply RTX enhancements for years to come.
Yojimbo - Saturday, February 23, 2019 - link
I wish there were an edit button. I just want to say that this makes sense, even if it eats into their margins somewhat in the short term. Right now people are upset over the price of the new cards. But that will pass assuming RTX actually proves to be successful in the future. However, if RTX does become successful but the people who paid money to be early adopters for lower-end RTX hardware end up getting squeezed out of the ray-tracing picture that is something that people will be upset about which NVIDIA wouldn't overcome so easily. To protect their brand image, NVIDIA need a plan to try to make present RTX purchases useful in the future being that they aren't all that useful in the present. They can't betray the faith of their customers. So with that in mind, disabling perfectly capable RTX hardware on lower end hardware makes sense.
u.of.ipod - Friday, February 22, 2019 - link
As a SFFPC (mITX) user, I'm enjoying the thicker, but shorter, card as it makes for easier packaging.
Additionally, I'm enjoying the performance of a 1070 at reduced power consumption (20-30w) and therefore noise and heat!
eastcoast_pete - Friday, February 22, 2019 - link
Thanks! Also a bit disappointed by NVIDIA's continued refusal to "allow" a full 8 GB VRAM in these middle-class cards. As to the card makers omitting the VR required USB3 C port, I hope that some others will offer it. Yes, it will add $20-30 to the price, but I don't believe I am the only one who's like the option to try some VR gaming out on a more affordable card before deciding to start saving money for a full premium card. However, how is VR on Nvidia with 6 GB VRAM? Is it doable/bearable/okay/great?
eastcoast_pete - Friday, February 22, 2019 - link
"who'd like the option". Google keyboard, your autocorrect needs work and maybe some grammar lessons.
Yojimbo - Friday, February 22, 2019 - link
Wow, is a USB3C port really that expensive?
GreenReaper - Friday, February 22, 2019 - link
It might start to get closer once you throw in the circuitry needed for delivering 27W of power at different levels, and any bridge chips required.
OolonCaluphid - Friday, February 22, 2019 - link
>However, how is VR on Nvidia with 6 GB VRAM? Is it doable/bearable/okay/great?It's 'fine' - the GTX 1050ti is VR capable with only 4gb VRAM, although it's not really advisable (see Craft computings 1050ti VR assessment on youtube - it's perfectly useable and a fun experience). The RTX 2060 is a very capable VR GPu, with 6gb VRAm. It's not really VRAM that is critical in VR GPU performance anyway - more the raw compute performance in rendering the same scene from 2 viewpoints simultaneously. So, I'd assess that the 1660ti is a perfectly viable entry level VR GPU. Just don't expect miracles.
eastcoast_pete - Saturday, February 23, 2019 - link
Thanks for the info! About the miracles: Learned a long time ago not to expect those from either Nvidia or AMD - fewer disappointments this way.
cfenton - Friday, February 22, 2019 - link
You don't need a USB C port for VR, at least not with the two major headsets on the market today.
PeachNCream - Friday, February 22, 2019 - link
This article reads a little like that infamous Steve Ballmer developers thing except it's not "developers, developers, developers, etc" but "traditional, traditional, traditionally, etc." instead. Please explore alternate expressions. The word in question implies long history which is something the computing industry lacks and the even shorter time periods referenced (a GPU generation or two) most certainly lack so the overuse stands out like a sore thumb in many of Anandtech's publications.
Oxford Guy - Saturday, February 23, 2019 - link
How about the utterly asinine use of the word "kit" to describe a set of RAM sticks that simply snap into a motherboard?
The Altair 8800 was a kit. The Heathkit H8 was a kit. Two sticks of RAM that snap into a board doth not a kit maketh.
futurepastnow - Friday, February 22, 2019 - link
A triple-slot card? Really, EVGA?
PeachNCream - Friday, February 22, 2019 - link
Yup, for 120W TDP of all things. But it's in the charts as a 2.75 slot width card so EVGA is probably hoping that no one understands how expansion slots actually would not permit the remaining .25 slot width to support anything.
darckhart - Friday, February 22, 2019 - link
lol this was my first thought upon seeing the photo as well.
GreenReaper - Saturday, February 23, 2019 - link
I suspect it was the cheapest way to get that level of cooling. A more compact heatsink-fan combo could have cost more.
130W (which is the TDP here) is not a *trivial* amount to dissipate, and it's quite tightly packed.
Oxford Guy - Saturday, February 23, 2019 - link
I think all performance GPUs should be triple slot. In fact, I think the GPU form factor is ridiculously obsolete.
Oxford Guy - Monday, February 25, 2019 - link
Judging by techpowerup's reviews, though, the EVGA card's cooling is inefficient.
eastcoast_pete - Friday, February 22, 2019 - link
@Ryan and Nate: What generation of HDMI and DP does the EVGA card have/support? Apologize if you had it listed and I missed it.
Ryan Smith - Friday, February 22, 2019 - link
HDMI 2.0b, DisplayPort 1.4.
rwsgaming - Friday, February 22, 2019 - link
Awesome review but you guys always missed the target audience. Lots of gamers are looking for the benchmarks of online games like PUBG, Fortnite, Apex, Overwatch, etc...
dezonio2 - Friday, February 22, 2019 - link
Multiplayer only games are pretty hard to consistently benchmark and get repeatable results from.
DominionSeraph - Friday, February 22, 2019 - link
Anandtech is a highly technical hardware review site, not a pop culture gaming site. The benchmarks are meant to be a highly repeatable, representative sample. Online multiplayer-only games are rarely repeatable run to run due to netcode and load variations, and you can often only run on the latest patch meaning you can't make an apples to apples comparison with older tests.
Cooe - Friday, February 22, 2019 - link
*facepalm* You're not the target audience. AnandTech isn't a gaming website... It's literally in the name lol. (*cough* "Tech" *cough*)
Korguz - Friday, February 22, 2019 - link
rwsgaming" but you guys always missed the target audience. Lots of gamers are looking for the benchmarks of online games like PUBG, Fortnite, Apex, Overwatch, etc..."
none of the games they use for testing.. are ones i play.. so meh... hehehhehe
29a - Friday, February 22, 2019 - link
I started reading this article until the SSD buyers guide video started taking up 1/4 of my screen space after scrolling down a bit. I'll read about the card on a site that doesn't take up so much of my screen space for something I have no interest in. This site sucks so much since Anand sold it.
PeachNCream - Sunday, February 24, 2019 - link
Reading Anandtech without an ad blocker is like banging a hooker without wearing a condom.
AustinPowersISU - Friday, February 22, 2019 - link
So it's a GTX 1070 with 2GB less RAM. The small difference in power consumption can be explained away by having 2 more GB of RAM.
Go to eBay, buy a 1070 for $200. Smile because you have the same performance, 2GB more RAM, and $80 more in your pocket.
Oxford Guy - Saturday, February 23, 2019 - link
Are they really that cheap? Ebay is flooded with absurdly high prices. Every time I see a deal it's already in the sold listing section.
It's very irritating to deal with Ebay because of this.
Oxford Guy - Saturday, February 23, 2019 - link
There are also tons of sellers who don't understand the basics of static electricity. They love to take glamour shots of cards on tables and carpets.eva02langley - Friday, February 22, 2019 - link
At 280$ for a Vega 56 with 3 games, it is brainless and one of the best values as of late. Can't wait for Navi to disrupt this overdue, stagnant market even more.
CiccioB - Friday, February 22, 2019 - link
Yes, it will be a new black hole in AMD's quarters if the production cost/performance is the same as the old GCN line...
You see, selling an HBM monster like Vega for that price simply means that the project is a complete flop (as it was with Fiji) and nvidia can continue selling its mainstream GPUs at the price they want despite the not so good market period.
eva02langley - Friday, February 22, 2019 - link
Final Fantasy XV is another game gimping AMD due to gameworks implementation.
eddman - Friday, February 22, 2019 - link
They disable those before benchmarking. From the article: "For our testing, we enable or adjust settings to the highest except for NVIDIA-specific features"
CiccioB - Friday, February 22, 2019 - link
All games gimp nvidia as their engines are written for the consoles that mount obsolete AMD HW.
Oxford Guy - Saturday, February 23, 2019 - link
It's hardly difficult to add in a bit of special slowdown sauce for the "PC" versions.
Comagnum - Friday, February 22, 2019 - link
This is such a joke. Vega 56 is now the same price and outperforms this terrible product, and the 1070 (AIB versions) performs similarly enough that the 1660ti has no real place in the market right now. Nvidia is a greedy terrible company. What a joke.
Falcon216 - Friday, February 22, 2019 - link
I followed your advice and bought a Vega56 instead of a 1660Ti and now my power supply has been making those weird noises animals make when they're suffering and need help. What do I do?
Cooe - Friday, February 22, 2019 - link
Fanboy nonsense alert!!! Unless you bought your power supply at a Chinese flea market, ignore this dude.
(Granted there are totally cases where you'd want something like a 1660Ti over a V56 for efficiency reasons [say ultra SFF], but this guy's spitting nonsense)
Falcon216 - Friday, February 22, 2019 - link
My point
========
Your Head
The V56 uses ~200w nominally depending on your choice of settings, in the detailed Tom's review it goes as low as 160w at the most minimum performance level and as high as 235w depending on the choice of power BIOS. The 1660Ti is then shown to use ~125w in BF1 and (assuming Tom's tested the V56 performance on stock settings) Anand's BF1 test shows a 9FPS lead (11%) over the 1660Ti. I'll trade that 11% performance for 40% less (absolute scale) power usage any day - My PSU ain't getting any younger and "lol just buy another one" is dumb advice dumb people make.
Happy now?
GreenReaper - Friday, February 22, 2019 - link
Your point is a lie, though, as you clearly didn't buy it on his recommendation. How can we believe anything you say after that?
Questor - Wednesday, March 6, 2019 - link
Not criticizing, simply adding:
Several times in the past, honest review sites did comparisons of electrical costs in several places around the States and a few other countries with regard to a brand A video card at a lower power draw than a brand B video card. The idea was to calculate a reasonable overall cost for the extra power draw and whether it was worth worrying about/worth specifically buying the lower draw card. In each case it was negligible in terms of additional cost in dollars (or whatever currency). A lot of these great sites have died out or been bought out and are gone now. It's a darned shame. We used to actually get real useful information about products and what all these values actually mean to the user/customer/consumer. We used to see the same for power supplies too. I haven't seen anything like that in years now. Too bad. It proved how little a lot of the numbers mattered in real life to real bill paying consumers.
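For anyone who wants to redo that kind of comparison, the arithmetic is short; here is a minimal sketch (the wattage gap, hours of gaming, and electricity price are assumptions to swap for your own):

    def yearly_extra_cost(extra_watts, hours_per_day, price_per_kwh):
        """Annual cost of one card drawing more power than another."""
        kwh_per_year = extra_watts / 1000 * hours_per_day * 365
        return kwh_per_year * price_per_kwh

    # Example: 75W more draw, 3 hours of gaming a day, $0.13 per kWh.
    print(f"${yearly_extra_cost(75, 3, 0.13):.2f} per year")  # about $10.68

Which is roughly the conclusion those old articles reached: for typical usage the difference is a few dollars a year, not a deciding factor.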
Icehawk - Friday, February 22, 2019 - link
Man this sucks, clearly this card isn't enough for 4k and I'm not willing to spend on a RTX 2070. Can I hope for a GTX 1170 at like $399? 8gb of RAM please. I'm not buying a new card until it's $400 or less and has 8gb+, my 970 runs 1440p maxed or close to it in almost all AAA games and even 4k in some (like Overwatch) so I'm not going for a small improvement - after 2 gens I should be looking at close to double the performance but it sure doesn't look like that's happening currently.
eva02langley - Friday, February 22, 2019 - link
Navi is your only hope.
CiccioB - Friday, February 22, 2019 - link
And I think he will be even more disappointed if he's looking for a 4K card that is able to play *modern* games.
BTW: No 1170 will be made. This card is the top Turing without RT+TC and so it's the best performance you can get at the lowest price. Other Turing cards with no RT+TC will be slower (though probably cheaper, but you are not looking for just a cheap card, you are looking for 2x the performance of your current one).
catavalon21 - Sunday, February 24, 2019 - link
I am curious, what are you basing "no 1170" on?
CiccioB - Monday, February 25, 2019 - link
Huh, let's see...designing a new chip costs a lot of money, especially when it is not that tiny.
A chip bigger than this TU116 would be just faster than the 2060, which has a 445mm^2 die that has to be sold with some margin (unlike AMD, which sells Vega GPU+HBM at the price of bread slices and at the end of the quarter reports gains that are a fraction of nvidia's; but that's good for AMD fans, it is good that the company loses money to make them happy with oversized HW that performs like the competition's mainstream parts).
So creating a 1170 simply means killing the 2060 (and probably the 2070), defeating the original purpose of those cards as the first lower-end (possibly mainstream) HW capable of RT.
Unless you are supposing nvidia is going to completely scrap their idea that RT is the future and that its support will be expanded in future generations, there's no valid, rational reason for them to create a new GPU that would replace the cut version of TU106.
All this without considering that AMD is probably not going to compete at 7nm, as with that PP they will probably just manage to reach Pascal performance, while at 7nm nvidia is going to blow any AMD solution away in terms of absolute performance, performance per W and performance per mm^2 (despite the addition of the new computational units that will find more and more usage in the future.. no one has yet thought of using tensor cores for advanced AI, for example).
So, no, there will be no 1170 unless it is a further cut of TU106 that in the end performs just like TU116 and is just a mere recycle of broken silicon.
Now, let me hear what makes you believe that a 1170 will be created.
catavalon21 - Tuesday, February 26, 2019 - link
I do not know if they will create an 1170 or not; to be fair, I am surprised they even created the 1160. You have a very good point, upon reflection, it is quite likely such a product would impact RTX sales. I was just curious what had you thinking that way.
Thank you for the response.
Oxford Guy - Saturday, February 23, 2019 - link
Our only hope is capitalism.
That's not going to happen, though.
Instead, we get duopoly/quasi-monopoly.
douglashowitzer - Friday, February 22, 2019 - link
Hey not sure if you're opposed to used GPUs... but you can get a used, overclocked, 3rd party GTX 1080 with 8GB vram on eBay for about $365-$400. In my opinion it's an amazing deal and I can tell you from experience that it would satisfy the performance jump that you're looking for. It's actually the exact situation I was in back in June of 2016 when I upgraded my 970 to a 1080. Being a proper geek, I maintained a spreadsheet of my benchmark performance improvements and the LOWEST improvement was an 80% gain. The highest was a 122% gain in Rise of the Tomb Raider (likely VRAM related but impressive nonetheless). Honestly I don't believe I've ever experienced a performance improvement that felt so "game changing" as when I went from my 970 to the 1080. Maybe waaay back when I upgraded my AMD 6950 to a GTX 670 :). If "used" doesn't turn you off, the upgrade of your dreams is waiting for you. Good luck to you!
wintermute000 - Friday, February 22, 2019 - link
There is no way your 970 runs 1440p maxed in modern AAA games. Unless your definition of maxed includes frames well below 60 and settings well below ultra.
I have a 1060 and it needs medium to medium-high to reliably hold 60FPS @ 1440p.
eddman - Friday, February 22, 2019 - link
$280 for ~40% on average better performance and still 6GB of memory? I already have a 6GB 1060. I suppose I have to wait for navi or 30 series before actually upgrading.
Fallen Kell - Friday, February 22, 2019 - link
I guess you missed the part where their memory compression technology has increased performance another 20-33% over previous generation 10xx cards, negating the need for higher memory bandwidth and more space within the card. So, 6GB on this card is essentially like 8-9GB on the previous generation. That is what compression can do (as long as you can compress and decompress fast enough, which doesn't seem to be a problem for this hardware).
eddman - Friday, February 22, 2019 - link
No, I didn't. Compression is not a replacement for physical memory, no matter what nvidia claims.
eddman - Friday, February 22, 2019 - link
I'm not an expert on this topic, but they state compression is used as a means to improve bandwidth, not memory space consumption.
Someone more knowledgeable can clear this up, but to my understanding textures are compressed when moving from vram to gpu, and not when loading from hdd/ssd or system memory into vram.
Ryan Smith - Friday, February 22, 2019 - link
"I'm not an expert on this topic, but they state compression is used as a mean to improve bandwidth, not memory space consumption."You are correct.
atiradeonag - Friday, February 22, 2019 - link
Laughing at those who think they can get a $279 Vega56 right now: where's your card? where's the link?
atiradeonag - Friday, February 22, 2019 - link
Posting a random "sale" that is instantly OOS is the usual failed stunt that fanboys from a certain faction use to argue for the price/perf.
Oxford Guy - Saturday, February 23, 2019 - link
It's also Newegg par for the course.
CiccioB - Friday, February 22, 2019 - link
To all those rejoicing that Vega56 is selling for a slice of bread.. that's how failing architectures end up when they are a generation behind.
Yes, nvidia cards are pricey, but that's because AMD solutions can't stand up to the competition even with expensive components like HBM and tons more W to burn.
So stop laughing about how poor this new card's price/performance ratio is; after a few weeks it will have the ratio that the market is going to give it. What we have seen so far is that Vega's appeal has gone below ground level, and as with any new nvidia launch AMD can answer only with a price cut, closely followed by a rebrand of something that is OCed (and pumped with even more W).
GCN was dead at its launch time. Let's really hope Navi is something new or we will have an nvidia monopoly on the market for another 2 year period.
Fallen Kell - Friday, February 22, 2019 - link
I don't have any such hopes for Navi. The reason is that AMD is still competing for the console and part of that is maintaining backwards compatibility for the next generation of consoles with the current gen. This means keeping all the GCN architecture so that graphics optimizations coded into the existing games will still work correctly on the new consoles without the need for any work.
GreenReaper - Friday, February 22, 2019 - link
Uh... I don't think that follows. Yes, it will be a bonus if older games work well on newer consoles without too much effort; but with the length of a console refresh cycle, one would expect a raw performance improvement sufficient to overcome most issues. But it's not as if when GCN took over from VLIW, older games stopped working; architecture change is hidden behind the APIs.
Korguz - Friday, February 22, 2019 - link
Fallen Kell" The reason is that AMD is still competing for the console and part of that is maintaining backwards compatibility for the next generation of consoles with the current gen. "
prove it... show us some links that state this
Korguz - Friday, February 22, 2019 - link
CiccioB" GCN was dead at its launch time"
prove it.. links showing this, maybe .....
CiccioB - Saturday, February 23, 2019 - link
Aahahah.. prove it.
Are 9 years of discounted sales not enough to show you that GCN simply started as a half-generation-old architecture and ended as a generation-obsolete one?
Yes, you may recall the only peak of glory in GCN's life, that is Hawaii, which was so discounted that it made nvidia drop the price of their 780Ti. After that move AMD just brought one fail after the other, starting with Fiji and its monster BOM cost just to reach the much cheaper GM200 based cards, and following with Polaris (yes, the one once labeled "next AMD generation is going to dominate") and then again with Vega, and Vega VII is no different.
What have you not understood about AMD having to use whatever technology is available just to get performance near that of a 2080? What do you think the improvements will be once nvidia moves to 7nm (or 7nm+)?
Today AMD is incapable of reaching nvidia's performance and they also lack their modern features. GCN at 7nm can be as fast as a 1080Ti that is 3 years older, and the AMD card still uses more power, which shows how inefficient the GCN architecture is.
That's why I hope Navi is a real improvement, or we will be left with an nvidia monopoly, as at 7nm it will really have more than a generation of advantage, seeing that it will be much more efficient and still have new features that AMD will only add 3 or 4 years from now.
Korguz - Saturday, February 23, 2019 - link
links to what you stated ??? sounds a little like just your opinion, with no links....
considering that AMD doesnt have the deep pockets that Nvidia has, and the fact that amd has to find the funds to R&D BOTH cpus AND gpus, while nvidia can put all the funds they can into R&D, it seems to me that AMD has done ok for what they have had to work with over the years, with Zen doing well in the CPU space, they might start to have a little more funds to put back into their products, and while i havent read much about Navi, i am also hopeful that it may give nvidia some competition, as we sure do need it...
CiccioB - Monday, February 25, 2019 - link
It seems you lack the basic intelligence to understand the facts that can be seen by anyone else.
You just have hopes that are based on "your opinion", not facts.
"they might start to have a little more funds to put back into their products". Well, last quarter, with Zen selling like never before, they managed to have a $28 million net income.
Yes, you read right. 28. Nvidia with all its problems got more than 500. Yes, you read right. About 20 times more.
There are 2 facts here (these are numbers based on facts, not opinion, and you can build interpretations on facts, not on your hopes for the future):
- AMD is selling Ryzen CPUs at a discount like its GPUs and both have a 0.2% net margin
- The margins they have on CPUs can't compensate for the losses they have in the GPU market, and seeing that they manage to make some money on consoles only when there are spikes of requests, I am asking you when and from what AMD will get new funds to pay for anything.
You see, it's not really "a hope" to believe that AMD is losing money on every Vega they sell, given the cost of the BOM with respect to the costs of the competition. Negating this is a "hope it is not real", not a sensible way to ask for "facts".
You have to know at least basic facts before coming here to ask for links on basic things that everyone that knows this market already knows.
If you want to stop looking like an idiot by constantly asking for "proofs and links", just start reading the quarterly results and see what the effects of the long-term strategies both companies have pursued are.
Then, if you have a minimum of technical competence (which I doubt), look at what AMD does with its mm^2 and Watts and what nvidia does.
Then come again to ask for "links" where I can tell you that AMD's architecture is one generation behind (and will probably be left further behind once nvidia moves to 7nm.. unless Navi is not GCN).
Do you have other intelligent questions to pose?
Korguz - Wednesday, February 27, 2019 - link
right now.. your facts.. are also just your opinion, i would like to see where you get your facts, so i can see the same, thats why i asked for links... again.. AMD is fighting 2 fronts, CPUs AND gpus.. Nvidia.. GPUs ONLY, im not refuting your " facts " about how much each is earning...its obvious... but.. common sense says.. if one company has " 20 times " more income than the other.. then they are able to put more back into the business, than the other... that is why, for the NHL at least, they have salary caps, to level the playing field so some teams, cant " buy " their way to winning... thats just common sense...
" AMD is selling Ryzen CPU at a discount like GPUs and boths have a 0.2% net margin "
and where did you read this.. again.. post a link so we can see the same info... where did you read how much it costs AMD to make each Vega GPU, or that they are losing money on each one ?? again.. post links so we can see the SAME info you are...
" You have to know at least basic facts before coming here to ask for links on basic things that everyone that knows this market already knows. " and some would like to see where you get these " facts " that you keep talking about... thats basic knowledge.
" if you have a minimum of technical competence " actually.. i have quite a bit of technical competence, i build my own comps, even work on my own car when i am able to. " look at what AMD does with its mm^2 and Watts and what nvidia does. " that just shows nvidia's architecture is just more efficient, and needs less power for that it does...
lastly.. i have been civil and polite to you in my posts.. resorting to name calling and insults, does not prove your points, or make your supposed " facts " any more real. quite frankly.. resorting to name calling and insults, just shows how immature and childish you are.. that is a fact
CiccioB - Thursday, February 28, 2019 - link
Did you read (and understand) AMD's latest quarter results?
Have you seen the total cost of production and the relative net income?
Have you an idea of how the margin is calculated (yes, it takes into account the production costs that are reported in the quarter results)?
Have you understood half of what I have written, when based on facts that AMD just made $28 of NET income last quarter there are 2 possible ways of seeing the cause of those pitiful numbers? One is that AMD is discounting every product (GPU and CPU) to a ridiculous margin, the other that Zen is sold at a profit while GPUs are not. Or you may try to hypnotize the third view, that they are selling Zen at a loss and GPUs with a big margin. Anything is good, but at least one of these is true. Take the one you prefer and then try to think which one requires the fewest artificial hypotheses to be true (someone once said that the best solution is most often the simplest one).
That demonstrates that nvidia's architecture is simply one generation ahead, as what changes from one generation to the other is usually the performance you can get from a certain die area that draws a certain amount of energy, and that is the reason a x80 level card today costs >$1000. If you can only get the same 1080Ti performance (and I'm not even considering the new features nvidia has added to Turing) by using a completely new PP node, more current and HBM, 3 years later, then you may arrive at the conclusion that something is not working correctly in what you have created.
So my statement that GCN was dead at launch (when a 7970 was on par with a GTX680 which was smaller and used much less energy) finds its perfect demonstration in Vega 20, where GCN is simply 3 years behind with a BOM costing at least twice that of the 1080Ti (and still using more energy).
Now, if you can't understand the minimum basic facts and your hope is that their interpretation using a completely genuine and coherent thesis is wrong and requires "facts and links", then it is not my problem.
Continue to hope that what I wrote is totally rubbish and that you are the one with the right answer to this particular situation. However what I said is completely coherent with the facts that we have been witnessing from GCN's launch up to now.
Korguz - Friday, March 1, 2019 - link
" Have you seen the total cost of production and the relative net income? "no.. thats is why i asked you to post where you got this from, so i can see the same info, what part do YOU not understand ?
" based on facts that AMD just made $28 of NET income last quarter" because you continue to refuse to even mention where you get these supposed facts from, i think your facts, are false.
" One is that AMD is discounting every product (GPU and CPU) to a ridiculous margin"
and WHERE have you read this??? has any one else read this, and can verify it?? i sure dont remember reading anything about this, any where
what does being hypnotized have to do with anything ?? do you even know what hypnotize means ?
just in case, this is what it means :
to put in the hypnotic state.
to influence, control, or direct completely, as by personal charm, words, or domination: The speaker hypnotized the audience with his powerful personality.
again.. resorting to being insulting means you are just immature and childish....
look.. you either post links, or mention where you are getting your facts and info from, so i can also see the SAME facts and info without having to spend hours looking.. so i can make the same conclusions, or, admit, you CANT post where you get your facts and info from, because, they are just your opinion, and nothing else.. but i guess asking a child to mention where they get their facts and info from, so one can then see the same facts and info, is just too much to ask...
CiccioB - Tuesday, March 5, 2019 - link
Kid, as I said you lack the basic intelligence to recognize when you are just arguing about nothing.
The numbers I'm using are those published by AMD and nvidia in their quarter results.
Now, if you are asking me for the links to those reports it means you don't have the minimum idea of what I'm talking about AND you cannot do a simple search with Google.
So I stand by my "insults": you do not have the intelligence to argue about this simple topic, so stop writing on this site completely; it has much better readers than you and is not gaining anything from your presence.
Korguz - Tuesday, March 5, 2019 - link
ahh here are the insults and name calling... and you are calling me a kid ??
i can, and have done a simple google search.. BUT, i would like to see the SAME info YOU are looking at as well, but again.. i guess that is just too much to ask of you, is it wrong to want to be able to compare the same facts as you are looking at ? i guess so.. cause you STILL refuse to post where you get your facts and info from, i sure dont have the time to spend who knows how long to do a simple google search...
by standing by your insults, you just show YOU are the child here.. NOT me, as only CHILDREN resort to insults and name calling...
as i said in my reply farther down :
you refuse to post links, OR mention your sources, simply because YOU DONT HAVE ANY.. most of what you post.. is probably made up, or rumor, if AT posted things like you do, with no sources, you probably would be all over them asking for links, proof and the like... and by YOUR previous posts, all of your info is made up and false..
maybe YOU should stop posting your info and facts from rumor sites, and learn to talk with some intelligence your self.
Qasar - Tuesday, March 5, 2019 - link
sorry CiccioB, but i agree with Korguz.. i have tried to find the " facts " on some of the things you have posted in this thread.. and i cant find them.. i also would like to know where you get your " opinions " from.
CiccioB - Wednesday, March 6, 2019 - link
Quarter results!
It's not really difficult to find them..
Try "AMD quarter results" and then "nvidia quarter results" in Google search engine and.. voilà, les jeux sont faits. Two clicks and you can read them. Back some years if you want,so you can have a history of what has happened during the last years apart the useless comments by fanboys you find on forums.
Now, if you have further problems at understanding all those tables and numbers or you do not know what is a gross margin vs a net income, then, you can't come here and argue I have no facts. It's you that can't understand publicly available data.
So if you want already chewed numbers that someone has interpreted for you, you can read them here:
https://www.anandtech.com/show/13917/amd-earnings-...
I wonder what you were looking for for not finding those numbers that have been commented by every site that is about technology.
@Korguz
You are definitely a kid. You surely do not scare me with all that nonsense you write when the solution for YOUR problem (not mine) was simply to read more and write less.
Korguz - Thursday, March 7, 2019 - link
CiccioB you are hilarious !!!
did you look up the word hypnotize, to see what it means, and how it even relates to this ? as, and i quote YOU " Or you may try to hypnotize the third view " what does that even mean ??
i KNEW the ONLY link you would mention.. is the EASY one to find.
BUT... what about all of these LIES :
" GCN was dead at its launch time"
" 9 years of discounted sell are not enough to show you that GCN simply started as half a generation old architecture to end as a generation obsolete one? "
" that is Hawaii, which was so discounted "
" starting with Fiji and it's monster BOM cost "
" AMD is selling Ryzen CPU at a discount like GPUs and both have a 0.2% net margin "
" One is that AMD is discounting every product (GPU and CPU) to a ridiculous margin "
" a "panic plan" that required about 3 years to create the chips. 3 years ago they already know that they would have panicked at the RTX cards launch and so they made the RT-less chip as well "
i did the simple google search for the above comments from you, as well as variations.. and guess what.. NOTHING comes up. THESE are the links i would like you to provide, as i cant find any of these LIES . so " It's you that can't understand publicly available data. " the above quotes, are not publicly available data, even your sacred " simple google search " cant find them.
lastly.. your insults and name calling ( and the fact that you stand by them ).. the only people i hear things like this from.. are my friends' and coworkers' TEENAGE CHILDREN. NOT adults.. adults don't resort to things like this, at least the ones that i know... this alone.. shows how immature and childish you really are... i am pretty sure.. you WILL never post links for 98% of the LIES, RUMORS, or your personal speculation, and opinions, because of the simple fact, you just CAN'T, as your sources for all this... simply don't exist.
when you are able to reply to people with out having to resort to name calling and insults, then maybe you might be taken seriously. till then... you are nothing but a lying, immature child, who needs to grow up, and learn how to talk to other people in a respectful manner... maybe YOU should take your OWN advice, and simply read more and write less. Have a good day.
D. Lister - Saturday, February 23, 2019 - link
@CiccioB
Navi will still be GCN unfortunately.
CiccioB - Monday, February 25, 2019 - link
If so, don't cry if prices remain high (if not higher) for the next 3 years.
We already know why.
Simplex - Friday, February 22, 2019 - link
"EVGA Precision remains some of the best overclocking software on the market."Better than MSI Afterburner?
Ryan Smith - Friday, February 22, 2019 - link
I consider both of them to be in the same tier, for what it's worth.
Rudde - Friday, February 22, 2019 - link
In what way does 12nm FFN improve over 16nm? The transistor density is roughly the same, the frequencies see little to no improvement and the power-efficiency has only seen small improvements. Worse yet, the space used per SM has gotten worse. I do know that Turing brings architectural improvements, but are they at the cost of die space? Seems odd that Nvidia wouldn't care about die area when their flagships are huge chips that would benefit from a more dense architecture.
Or could it be that Turing adds some kind of (sparse) logic that they haven't mentioned?
Rudde - Friday, February 22, 2019 - link
Never mind, the second page explains this well. (Parallel execution of fp16, fp32 and int32)
CiccioB - Saturday, February 23, 2019 - link
Not only that.
With Turing you also get mesh shading and better support for thread switching, an awful technique used on GCN to improve its terrible efficiency, which otherwise has lots of "bubbles" in the pipelines.
That's the reason you see previously AMD-optimized games that didn't run too well on Pascal work much better on Turing, as the highly threaded technique (the famous AC, which is a bit overused in engines created for the console HW) is not going to constantly stall the SM with useless work such as frequent task switching.
AciMars - Saturday, February 23, 2019 - link
"Worse yet, the space used per SM has gotten worse." Not true... you know, Turing has separate CUDA cores for int and fp. It means that when Turing has 1536 CUDA cores, that really means 1536 int + 1536 fp cores. So in terms of die size, Turing actually has 2x the CUDA cores compared to Pascal.
CiccioB - Monday, February 25, 2019 - link
Not exactly; the number of CUDA cores is the same, it's just that a new independent ALU has been added. A CUDA core is not only an execution unit; it also includes registers, memory (cache), buses (memory access) and other special execution units (load/store).
By adding a new integer ALU you don't automatically get double the capacity the way you would by really doubling the number of complete CUDA cores.
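For what it's worth, a rough way to see why the extra INT pipe helps without counting as "2x the CUDA cores" is a simple issue-slot model. This is just a back-of-the-envelope sketch in Python; the ~36 integer instructions per 100 FP instructions figure is the average NVIDIA has quoted for Turing-era shader workloads, and the one-instruction-per-clock-per-scheduler assumption is mine, purely for illustration:

```python
# Back-of-the-envelope model of FP issue-slot utilization on a single scheduler.
# Assumptions (mine, for illustration): one instruction issued per clock per
# scheduler, and a shader mix of ~36 integer instructions per 100 FP
# instructions, which is the average NVIDIA has quoted for Turing-era games.

def fp_utilization(int_per_100_fp: float, separate_int_pipe: bool) -> float:
    """Fraction of clocks in which an FP instruction is actually issued."""
    fp_work = 100.0
    int_work = int_per_100_fp
    if separate_int_pipe:
        # Turing-style: INT instructions go to their own ALU, so (ideally)
        # every issue slot can feed the FP pipe.
        slots_needed = fp_work
    else:
        # Pascal-style shared pipe: INT instructions occupy FP issue slots.
        slots_needed = fp_work + int_work
    return fp_work / slots_needed

shared = fp_utilization(36, separate_int_pipe=False)    # ~0.74
dedicated = fp_utilization(36, separate_int_pipe=True)  # 1.00
print(f"shared pipe FP utilization:        {shared:.2f}")
print(f"dedicated INT pipe FP utilization: {dedicated:.2f}")
print(f"upper-bound speedup: {dedicated / shared:.2f}x")  # ~1.36x, not 2x
```

On that mix, the dedicated INT pipe buys roughly a third more FP throughput in the best case, which is a long way from the 2x you would get from genuinely doubling complete CUDA cores.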
ballsystemlord - Friday, February 22, 2019 - link
Here are some spelling and grammar corrections.
This has proven to be one of NVIDIA's bigger advantages over AMD, an continues to allow them to get away with less memory bandwidth than we'd otherwise expect some of their GPUs to need.
Missing d as in "and":
This has proven to be one of NVIDIA's bigger advantages over AMD, and continues to allow them to get away with less memory bandwidth than we'd otherwise expect some of their GPUs to need.
so we've only seen a handful of games implement (such as Wolfenstein II) implement it thus far.
Double implement, 1 before the ()s and 1 after:
so we've only seen a handful of games (such as Wolfenstein II) implement it thus far.
For our games, these results is actually the closest the RX 590 can get to the GTX 1660 Ti,
Use "are" not "is":
For our games, these results are actually the closest the RX 590 can get to the GTX 1660 Ti,
This test offers a slew of additional tests - many of which use behind the scenes or in our earlier architectural analysis - but for now we'll stick to simple pixel and texel fillrates.
Missing "we" (I suspect that the sentence should be reconstructed without the "-"s, but I'm not that good.):
This test offers a slew of additional tests - many of which we use behind the scenes or in our earlier architectural analysis - but for now we'll stick to simple pixel and texel fillrates.
"Looking at temperatures, there are no big surprises here. EVGA seems to have tuned their card for cooling, and as a result the large, 2.75-slot card reports some of the lowest numbers in our charts, including a 67C under FurMark when the card is capped at the reference spec GTX 1660 Ti's 120W limit."
I think this could be clarified, as there are 2 EVGA cards in the charts and the one at 67C is not explicitly labeled as EVGA.
Thanks
Ryan Smith - Saturday, February 23, 2019 - link
Thanks!
boozed - Friday, February 22, 2019 - link
The model numbers have become quite confusing.
Yojimbo - Saturday, February 23, 2019 - link
I don't think they are confusing; 16 is between 10 and 20, plus the RTX is extra differentiation. In fact, if NVIDIA had some cards in the 20 series with RTX capability and some cards in the 20 series without RTX capability, even if some were 'GTX' and some were 'RTX', that would be far more confusing. Putting the non-RTX Turing cards in their own series is a way of avoiding confusion. But if they actually come out with an "1180", as some rumors floating around say, that would be very confusing.
haukionkannel - Saturday, February 23, 2019 - link
It will be interesting to see next year. RTX 3050 and GTX 2650 Ti for the weaker versions, if we get a new RTX card family... Hmm... that could work if they keep the naming. 2021: RTX 3040 and GTX 2640 Ti...
CiccioB - Thursday, February 28, 2019 - link
Next generation, all cards will have enough RT and tensor cores enabled.
Oxford Guy - Saturday, February 23, 2019 - link
"The NVIDIA GeForce GTX 1660 Ti Review, Feat. EVGA XC GAMING: Turing Sheds RTX for the Mainstream Market"The same idea, restated:
"NVIDIA Admits, With Its GeForce GTX 1660 Ti Turing, That RTX Isn't Ready For The Mainstream"
just6979 - Saturday, February 23, 2019 - link
Why disable all AMD- or NVIDIA-specific settings? Anyone using those cards would have those settings on... shouldn't the numbers reflect exactly what the cards are capable of when utilizing all the settings available? You wouldn't do a Turing Major review without giving some numbers for RTX ON in any benchmarks that supported it...
CiccioB - Monday, February 25, 2019 - link
Yes, the tests could be done with specific GPU features turned on, but you would have to clearly state what advantage each particular addition brings to the final image quality. Because you can have (optional) effects that cut the frame rate but increase the quality a lot, looking only at the final number may lead you to conclude that one GPU is better than another because it is faster (or just costs less), when in reality you are comparing two different kinds of quality results.
It's no different from testing two cards with different detail settings (without stating what they are) and then trying to work out which is the better one based only on the frame rate results (which is the kind of result everyone looks at).
jarf1n - Sunday, February 24, 2019 - link
The load power consumption measurement is wrong. If you want to see only the GPU, measure only the GPU, like TechPowerUp does. If you measure the total system load, it clearly doesn't show it right.
134W GTX 1660 Ti
292W Vega 56
Source: TechPowerUp
It's clear that the GTX 1660 Ti is the much, much better GPU, at least for FHD and QHD as well.
Huge difference.
CiccioB - Monday, February 25, 2019 - link
Well, that however does not tell the entire story. The ratio versus the total consumption of the system is also important.
Let's say that your gaming PC already uses 1000W. A card that sucks 100W more just adds 10% to your power draw. Meanwhile, if your PC is using 100W, such a card will double the consumption. As you see, the card is always using 100W more, but the impact is different.
Let's make a different example: your PC uses about 150W in everyday use. You have to buy an SSD. There are some SSDs that consume twice the power of others for the same performance.
You may say that the difference is huge.
Well, an SSD consumes between 2 and 5W. Buying the less efficient one (5W) is not really going to have an impact on the total consumption of your PC.
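To put rough numbers on that reasoning, here is a trivial sketch (the wattages are just the hypothetical figures used above, not measurements from the review):

```python
# Relative impact of a fixed extra draw on systems with different baselines.
# The wattages are the hypothetical figures from the comment above, not
# measurements taken from the review.

def relative_increase(baseline_watts: float, extra_watts: float) -> float:
    """Extra draw expressed as a fraction of the baseline system power."""
    return extra_watts / baseline_watts

for baseline in (1000, 150, 100):
    pct = relative_increase(baseline, 100) * 100
    print(f"{baseline:>4} W system + 100 W card -> +{pct:.0f}% total draw")

# Same idea for the SSD example: a 3 W difference (5 W vs 2 W) on a 150 W desktop.
print(f"5 W vs 2 W SSD on a 150 W PC -> +{relative_increase(150, 3) * 100:.0f}% total draw")
```

The same 100W delta is a 10% bump on a 1000W rig but a 100% bump on a 100W one, while the "twice the power" SSD moves the total by about 2%.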
ilkhan - Sunday, February 24, 2019 - link
Coming from a GTX 970 and playing on a 2560x1600 monitor, which card should I be looking at?
Ryan Smith - Monday, February 25, 2019 - link
You'd likely want an RTX 2060, if not a bit higher with the RTX 2070.
https://www.anandtech.com/bench/product/2148?vs=23...
Mad Maxine - Monday, February 25, 2019 - link
Price is still crap for the performance. We live in an age now that sees hardware and software no longer growing, and a GPU from 2012 can still run all modern games today. The market is not going to be huge for overpriced GPUs that are really not that much of an improvement over 2012.
Oxford Guy - Monday, February 25, 2019 - link
Telemetry is growing. You are "your" data.
Psycho_McCrazy - Tuesday, February 26, 2019 - link
Given that 21:9 monitors are also making great inroads into the gamer's purchase lists, can benchmark resolutions also include 2560.1080p, 3440.1440p and (my wishlist) 3840.1600p benchies??
eddman - Tuesday, February 26, 2019 - link
2560x1080, 3440x1440 and 3840x1600
That's how you write it, and the "p" should not be used when stating the full resolution, since it's only supposed to be used for denoting video format resolutions.
P.S. using 1080p, etc. for display resolutions isn't technically correct either, but it's too late for that.
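For reference, a quick pixel-count comparison (just arithmetic, not figures from the review) shows where those ultrawide resolutions would fall relative to the usual 16:9 test points, which is roughly what any added benchmarks would hinge on:

```python
# Pixel counts of the requested ultrawide resolutions vs. common 16:9 test points.
resolutions = {
    "1920x1080": (1920, 1080),
    "2560x1080": (2560, 1080),  # 21:9 ultrawide FHD
    "2560x1440": (2560, 1440),
    "3440x1440": (3440, 1440),  # 21:9 ultrawide QHD
    "3840x1600": (3840, 1600),  # ~24:10 ultrawide
    "3840x2160": (3840, 2160),
}

base = 1920 * 1080
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:4.1f} MP, {pixels / base:.2f}x the pixels of 1920x1080")
```

So 2560x1080 lands just above 1080p, 3440x1440 just above 1440p, and 3840x1600 roughly three quarters of the way to 4K.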
Ginpo236 - Tuesday, February 26, 2019 - link
A 3-slot ITX-sized graphics card. What ITX case can support this? 0.
bajs11 - Tuesday, February 26, 2019 - link
Why can't they just make a GTX 2080 Ti with the same performance as the RTX 2080 Ti but without the useless RT and DLSS, and charge something like 899 USD (still 100 bucks more than the GTX 1080 Ti)? I bet it would sell like hotcakes, or at least better than their overpriced RTX 2080 Ti.
peevee - Tuesday, February 26, 2019 - link
Do I understand correctly that this thing does not have PCIe4?
CiccioB - Thursday, February 28, 2019 - link
No, they do not have a PCIe4 bus. Do you think they should?
Questor - Wednesday, February 27, 2019 - link
Why do I feel like this was a panic plan in an attempt to bandage the bleed from the RTX failure? No support at launch, and months later still abysmal support on a non-game-changing and insanely expensive technology. I am not falling for it.
CiccioB - Thursday, February 28, 2019 - link
Yes, a "panic plan" that required about 3 years to create the chips.3 years ago they already know that they would have panicked at the RTX cards launch and so they made the RT-less chip as well. They didn't know that the RT could not be supported in performance with the low number of CUDA core low level cards have.
They didn't know that the concurrent would have played with the only weapon it was left to it to battle, that is prize as they could not think that the concurrent was not ready with a beefed up architecture capable of the sa functionalities.
So, yes, they panicked for sure. They were not prepared to anything of what is happening,
Korguz - Friday, March 1, 2019 - link
" that required about 3 years to create the chips.3 years ago they already know that they would have panicked at the RTX cards launch and so they made the RT-less chip as well. They didn't know that the RT could not be supported in performance with the low number of CUDA core low level cards have. "
and where did you read this? you do understand, and realize... it IS possible to either disable or remove parts of an IC without having to spend "about 3 years" to create the product, right? intel does it with the IGP in their CPUs, and AMD did it back in the Phenom days with chips like the Phenom X4 and X3...
CiccioB - Tuesday, March 5, 2019 - link
So they created the TU116, a completely new die without RT and tensor cores, to reduce the size of the die and lose about 15% of performance with respect to the 2060, all in 3 months, because they panicked? You probably have no idea of the effort it takes to create a new 280mm^2 die.
Well, from this and your previous posts, you have no idea of what you are talking about at all.
Korguz - Tuesday, March 5, 2019 - link
and again... WHERE do you get your info from?? they can remove parts of ICs, or disable them. have you not been reading the articles on here about the disabled IGPs in Intel's CPUs, while still charging the SAME price as the fully enabled ones?
you refuse to post links OR mention your sources, simply because YOU DON'T HAVE ANY... IMO, most of what you post is probably made up, or rumor. if AT posted things like you do, with no sources, you would probably be all over them asking for links, proof and the like... and by YOUR previous posts, all of your info is made up and false...
there is no point talking to a CHILD any more... when are you going to resort to name calling and insults again?
Hrel - Friday, March 1, 2019 - link
Last page: I don't think comparing the 1660 Ti to the 1060 6GB is appropriate; it should be either the 3GB model or the 1050 Ti. Comparing it to the 1060 makes it look like Nvidia isn't raising prices as much as they really are.
I'm basically out of the GPU market unless and until pricing changes. Not that any good games have come out in the last few years, or are scheduled to. But I should be able to run 3 monitors at 1080p with 60fps minimum in any modern game for $200. Based on the numbers here, I don't think this $300 1660 Ti could even do that, and we're already over the threshold by $100.
You are right about not caring about RTX. Basically the timing was just really bad for it, global economy is in contraction. Moore's law is dead, I guess that's why they're trying some other form of value add, but charging consumers isn't the way to do it. Labor participation rate is barely above 60%, over 1/3rd of the country is unemployed. Wages have stagnated for 70 years! We don't have any more to give!
Questor - Wednesday, March 6, 2019 - link
Does anyone think EVGA could add just a bit more depth to that card? What is it? A 3-slot? At least 2.5. It's either a portable furnace or idiotic overkill.
zazzn - Friday, April 19, 2019 - link
Why is PUBG never tested as one of the test games? It's notoriously badly optimized, which would show true raw performance.
rothayato - Monday, August 5, 2019 - link
As an SFFPC (mITX) user, I'm enjoying the thicker, but shorter, card as it makes for easier packaging. Additionally, I'm enjoying the performance of a 1070 at reduced power consumption (20-30W) and therefore noise and heat! https://rottenhayato.com/_udata/gsnn/tenor-369.gif
bobhumplick - Tuesday, August 20, 2019 - link
If somebody hasn't upgraded in a while (9 series or older, or 300 series or older for AMD), then one of these cards is OK. Not great, but adequate. If you can last with what you have, or if you would be happy with a used or refurbished card, then go that route or wait for the real next gen (Zotac has refurbed 1070 Tis for 269 and they overclock to 1080-level performance).
Nvidia wanted to put these on 7nm, or at least 10nm. 10nm isn't worth it in terms of performance and density (it's more of a cell phone node), and 7nm needs EUV to make large dies. It's the waiting game. Once EUV comes (if it does), then we will see a spurt of card generations coming quicker like they used to, and then another slowdown after about 5nm, maybe.