"How will AMD and NVIDIA solve the problem they face and bring newer, better products to the market?"
My suggestion is they send their CEOs over to Intel to beg on their knees for access to their 14nm process. This is getting silly, GPUs shouldn't be 4 years behind CPUs on process node. Someone cut Intel a big fat check and get this done already.
It's not just about having access to the process technology and fab. The cost of actually designing and verifying an SoC at nodes beyond 28nm is approaching the breaking point for most markets, which is why companies aren't jumping onto them. I saw one estimate of $500 million for development of a 16/14nm device. You'd better have a pretty good lock on the market to spend that kind of money.
Seriously, nVidia's market cap is $10 billion; they can spend a tiny fortune moving to 20nm and beyond... if they want to.
I just don't think they want to cannibalize their previous products with such leaps and bounds in performance while also absolutely destroying their competition.
Moving to a smaller process isn't out of nVidia's reach; I just don't think they have a competitive incentive to spend the money on it. They've already been accused of becoming a monopoly after purchasing 3dfx, and it'd be painful if AMD/ATI exited the PC graphics market because nVidia's Maxwell cards, being twice as efficient as GCN, were priced identically.
At the moment it is out of reach for them, at least from a financial perspective. While it would be awesome to have Maxwell designed for and produced on Intel's 14nm process, Intel doesn't even have the capacity to produce all of their own CPUs... until fall 2015 (Broadwell Xeon-EP release)...
"it also marks the end of support for NVIDIA’s D3D10 GPUs: the 8, 9, 100, 200, and 300 series. Beginning with R343 these products are no longer supported in new driver branches and have been moved to legacy status." - This is it. The time has come to buy a new card to replace my GeForce 9800GT :)
Such a modern card - why bother :-) The 980 will finally replace my 8800 GTX. Now that's a genuinely old card!! Actually I mainly need to do the upgrade because the power bills are so ridiculous for the 8800 GTX! For pity's sake, the card only has one power profile (high power usage).
I've got an 8800 GTS 640MB still running in my mom's rig that's far more than what she'd ever need. Despite getting great performance from my MSI 660Ti OC 2GB Power Edition, it might be time to consider moving up the ladder since finding another identical card at a decent price for SLI likely wouldn't be worth the effort.
So, either I sell off this 660Ti, give it to her, or hold onto it for an HTPC build at some point down the line. Decisions, decisions. :)
I'd hold on to it. That's still a damn fine card. Honestly, you could probably find a used one on eBay for a decent price and SLI it up.
IMO though I'd splurge for a 970 and call it a day. I've got dual 760s right now, the first time I've done SLI in probably 10 years. And honestly, the headaches just aren't worth it. Yeah, most games work, but some games have weird graphical issues (BF4 near release was a big one, DOTA 2 doesn't seem to like it), others don't utilize it well, etc. I kind of wish I'd just stuck with the single 760. Either way, my 2p.
Yeah, I tried to buy a nice card at that time despite wanting something higher than a 660Ti. But, as my wallet was the one doing the dictating, it's what I ended up with and I've been very happy. My only concern with a used one is just that: it's USED. Electronics are one of those "no go" zones for me when it comes to buying second hand, since you have no idea about the circumstances surrounding the device, and seeing as it's a video card and not a Blu-ray player or something, I'd like to know how long it's run, whether it's been OC'd or not, and the like. I'd be fine with buying another one new, but not for the prices I'm seeing that are right in line with a 970. That would be dumb.
In the end, I'll probably wait it out a bit more and decide. I'm good for now and will probably buy a new 144Hz monitor instead.
Psshhhhh.... I still have my 3dfx Voodoo SLI card. Granted it's just sitting on my desk, but still!!!
In all seriousness though, my roommate, who is NOT a gamer, is still using an old 7800 GT card I had lying around because the video card in his ancient computer decided to go out and he didn't feel like building a new one. Can't say I blame him; Core 2 Quads are juuust fine for browsing the web and such.
>> the power bills are so ridiculous for the 8800 GTX!
Sorry but this is ridiculous. Do the math.
Best info I can find is that your card is consuming 230W. Assuming you're paying 15¢/kWh, even gaming for 12 hours a day every day for a whole month will cost you $12.59. Doing the same with a GTX 980 (165W) would cost you $9.03/month.
So you'd be paying maybe $580 to save $3.56 a month.
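For anyone who wants to check that arithmetic, here is a minimal sketch (assuming the 15¢/kWh rate and 12 hours/day from the comment above, with a "month" averaged over a 365-day year):

```python
# Sanity check of the electricity-cost arithmetic above.
# Assumptions (taken from the comment, not measured): 15 cents/kWh,
# 12 hours of gaming per day, a "month" averaged over a 365-day year.
RATE = 0.15                        # $/kWh
HOURS_PER_MONTH = 12 * 365 / 12    # = 365 hours of load per month

def monthly_cost(watts):
    """Cost in dollars of running a fixed load for one month."""
    return watts / 1000 * HOURS_PER_MONTH * RATE

print(round(monthly_cost(230), 2))                      # 8800 GTX-class load -> 12.59
print(round(monthly_cost(165), 2))                      # GTX 980 TDP         -> 9.03
print(round(monthly_cost(230) - monthly_cost(165), 2))  # savings             -> 3.56
```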
There is a major difference between market capitalization and available capital for investment. Market Cap is just a rote multiplication of the number of shares outstanding by the current share price. None of this is available for company use and is only an indirect measurement of how well a company is performing. Nvidia has $1.5 billion in cash and $2.5 billion in available treasury stock. Attempting to match Intel's process would put a significant dent into that with little indication it would justify the investment. Nvidia already took on a considerable chunk of debt going into this year as well, which would mean that future offerings would likely go for a higher cost of debt, making such an investment even harder to justify.
While Nvidia is blowing out AMD 3:1 on R&D and capacity, Intel is blowing both of them away, combined, by a wide margin. Intel is dropping $10 billion a year on R&D, which is a full $3 billion beyond the entire asset base of Nvidia. It's just not possible to close the gap right now.
I don't think you realize how many billions of dollars you need to spend to open a 14nm factory, not even counting R&D and yearly costs. It's humongous; there is a reason why there are so few foundries in the world.
Well, if the NVIDIA/AMD CEOs are blind enough that they cannot see it coming, then Intel is going to manufacture their next integrated graphics on a 10 or 8nm chip, and though immature, it will be tough competition for them in terms of power, efficiency, and even weight.
Remember, currently PCs ship with Intel's integrated graphics as a given, and people add third-party graphics only because Intel's isn't good enough, literally adding the weight of two graphics solutions (Intel's and the third party's) to the product. It's all worlds more convenient when integrated graphics outperforms, or can at least challenge, third-party GPUs; we would just throw away NVIDIA, and guess what, they won't remain a monopoly anymore but rather be completely wiped out.
Besides, Intel's integrated graphics are getting more mature in more than just die size with every launch; just compare the 4000 series with the 5000 series. It won't be long before they catch up.
I have to agree that it's not just about the verification cost breaking the bank. However, what I think is the more likely reason is that since the current node works, they will try to wring every penny out of that node. Look at the prices for the Titan Z. If that is not an attempt to fleece the "gotta have it" buyer, I don't know what is.
This is the most likely thing to happen: as Intel's transition to 14nm takes place over the next 6 months, those 22nm fabs will sit empty. They could sell capacity on a process similar to TSMC's latest while keeping their advantage at the same time.
I can see Nvidia switching to Intel's 14nm; however, Intel charges a lot more than TSMC for its foundry services (because they want to maintain their high margins). That would mean it's only economical for the high-end cards.
Nvidia would get twice as many GPUs per wafer on a 14nm process than 28nm. Maxwell at 14nm would blow Intel integrated and AMD out of the water in performance and power usage.
That simply isn't the reality. Samsung has better than 28nm processes also. This type of partnership would work well for Nvidia and AMD to partner with Samsung on their fabs. It makes more sense than Intel because Intel views Nvidia as a threat and competitor. There are reasons GPUs are still on 28nm and it is beyond process availability.
Unfortunately, that's not how it works. A 14nm process isn't simply a 28nm process scaled by 0.5; different parts are scaled differently, and so the overall die area savings aren't that simple to compute.
In a sense, the concept of a "14nm" process is almost a bit of a marketing term, since various components may still be much larger than 14nm. And of course, the same holds for TSMC's 28nm process... so a true comparison would require more knowledge than you or I have, I'm sure :-) - I'm not sure if Intel even releases the precise technical details of how things are scaled in the first place.
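As a rough illustration of why "twice as many GPUs per wafer" isn't a given, here's a back-of-the-envelope sketch; the ideal number assumes every feature shrinks linearly with the node name, and the 0.7 "realized shrink" figure is purely hypothetical:

```python
# Back-of-the-envelope die-area scaling.
old_node, new_node = 28, 14                    # nm
ideal_area_factor = (new_node / old_node) ** 2
print(ideal_area_factor)                       # 0.25 -> ideally ~4x dice per wafer

# Real processes don't shrink every layer equally; suppose only ~70% of the
# linear shrink is realized (a hypothetical number, not a foundry spec).
realized_linear_shrink = 0.7
print(realized_linear_shrink ** 2)             # ~0.49 -> closer to "2x per wafer"
```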
No, because Intel is using their 22nm for Haswell parts... the CPU transition ends in a year with the Broadwell Xeon-EP... at which point almost all the fabs will either be upgraded or upgrading to 14nm, and the rest used to produce chipsets and other secondary dies.
To Ryan Smith: How can the GTX 980 possibly have a 165W TDP when it actually consumes 8 watts more than the 195W TDP GTX 680!? Please explain. Did Nvidia just play games with the figures to make them look more impressive?
You're right, power consumption and heat output are related. That's because they're one and the same! What else could that electricity be converted to? Light? A massive magnetic field? Mechanical energy? (The fan, slightly, but the transistors aren't going anywhere.)
No, they aren't the same. Not all the electricity used is converted to heat. This is where the word EFFICIENCY comes into play. Yes, it is related in a way, but Maxwell is more efficient with the electricity it draws, using more of it and losing less of it as converted heat output. It's all in its design.
Bullshit. Since a GPU doesn't do chemical or mechanical transformations, all the energy used is converted to heat (by way of moving electrons around). Efficiency in a GPU means how much energy is used for a fixed set of calculations (for example: FLOPS).
there is "work" being done, as transistors have to "flip" by use of electrons. Even if you don't believe that "input energy =\= output heat" think of it this way 100w incandescent bulb produces X amount of useful light 18w florescent bulb also produces X amount of useful light
in this sense the florescent bulb is much more efficient as it uses only 18w to produce the same light as the 100w incandescent. so if we say they produce the same amount of heat, then 100w florescent would produce ~5x the light of a 100w incandescent.
The power draw figures in this article are overall system power draw, not GPU power draw. Since the 980 offers significantly more performance than the 680, it's cranking out more frames, which causes the CPU to work harder to keep up. As a result, the CPU power draw increases, counteracting the benefits of lower GPU power draw.
I don't think that can explain the whole difference. It performs similarly to a 780 Ti in Crysis 3, so the difference in power consumption can only come from the card. The 980 is rated 85W less in TDP but consumes only 68W less at the wall. The discrepancy gets worse when you add losses in the power supply.
My guess is the TDP is rated at nominal clock rate, which is cheating a little because the card consistently runs much higher than nominal because of the boost.
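A rough sketch of how a TDP gap translates to the wall, assuming a hypothetical PSU efficiency of 88% (the TDP figures are the cards' rated numbers, not measurements):

```python
# Rough illustration of TDP vs. wall-power deltas once PSU losses are included.
PSU_EFFICIENCY = 0.88                    # assumed, typical 80 Plus Gold range

tdp_delta_dc = 250 - 165                 # 780 Ti TDP minus GTX 980 TDP, watts at the cards
print(tdp_delta_dc / PSU_EFFICIENCY)     # ~96.6 W expected at the wall if both sat at TDP

measured_wall_delta = 68                 # watts, the difference cited above
print(measured_wall_delta * PSU_EFFICIENCY)  # ~59.8 W difference on the DC side
```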
Yes, but the 980's clock is significantly lowered for the FurMark test, down to 923MHz. The TDP should be fairly measured at speeds at which games actually run, 1150-1225MHz, because that is the amount of heat that we need to account for when cooling the system.
If that is the case, then the charts are misleading. GTX 680 has a 195W TDP vs. GTX 770's 230W (going by Wikipedia), but the 680 uses 10W more in the FurMark test.
I eagerly await your GTX 970 report. Other sites say that it barely saves 5W compared to the GTX 980, even after they correct for factory overclock. Or maybe power measurements at the wall aren't meant to be scrutinized so closely :)
To follow up: in your GTX 770 review from May 2013, you measured the 680 at 332W in FurMark, and the 770 at 383W in FurMark. Those numbers seem more plausible.
680 is a bit different because it's a GPU Boost 1.0 card. 2.0 included the hard TDP and did away with separate power targets. Actually what you'll see is that GTX 680 wants to draw 115% TDP with NVIDIA's current driver set under FurMark.
As stated in the article, the power figures are total system power draw. The GTX 980 is throwing out nearly double the FPS of the GTX 680, so this is causing the rest of the system (mostly the CPU) to work harder to feed the card. This in turn drives the total system power consumption up, despite the fact that the GTX 980 itself is drawing less power than the GTX 680.
Because the GTX 980 makes so many more frames, the CPU is worked a lot harder. The watts in those charts are for the whole system, so when the CPU uses more power it makes it harder to directly compare GPUs.
The simple fact is that a GPU more powerful than a GTX 980 does not make sense right now, no matter how much we would love to see it. See, most folks are still gaming @ 1080, and some of us are moving up to 1440. Under these scenarios, a GTX 980 is more than enough, even if quality settings are maxed out. Early reviews show that it can even handle 4K with moderate settings, and we should expect further performance gains as drivers improve. Maybe in a year or two, when 4K monitors become more relevant, a more powerful GPU would make sense. Right now they simply don't.

For the moment, nVidia's move is smart and commendable: power efficiency! I mean, such a powerful card at only 165W! If you are crazy/wealthy enough to have two of them in SLI, you can cut your power demand by 170W, with accompanying gains in temps and/or noise, and a less expensive PSU if you're building from scratch.

In the end, are these new cards great? Of course they are! Does it make sense to upgrade right now? Only if you're running a 5xx or 6xx series card, or if your demands have increased dramatically (multi-monitor setup, higher res, etc.).
A more powerful gpu does make sense. Some people like to play their games with triple monitors, or more. A single gpu that could play at 7680x1440 with all settings maxed out would be nice.
How many of us demand such power? The ones who really do can go SLI and OC the cards. nVidia would be spending billions for a card that would sell thousands. As I said: we would love the card, but it still makes no sense. Again, I would love to see it, but in the foreseeable future I won't need it. Happier with noise, power and heat efficiency.
Here's one that demands such power. I play 3600*1920 using 3 screens, almost 4k, 1/3 the budget, and still useful for, you know, working. Don't want sli/crossfire. Don't want a space heater either.
Gaming at 1080@144, or 1080 with a min FPS of 120 for ULMB, is no joke when it comes to GPU requirements. Most modern games max out at 80-90fps on an OC'd GTX 670; you need at least an OC'd GTX 770-780. I'd recommend a 780 Ti. And though a 24" 1080p screen might seem "small," you only have so much focus. You can't focus on peripheral vision; you'd have to move your eyes to focus on another piece of the screen. The 24"-27" size seems perfect so you don't have to move your eyes/head much or at all.
The next step is 1440@144, or a min FPS of 120, which requires more GPU than 4K60 (see the quick pixel-rate check below). As 1440 is about 2x 1080, you'd need a GPU about 2x as powerful. So you can see why NVIDIA must put out a powerful card at a moderate price point. They need it for their 144Hz G-Sync tech and 3D Vision.
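The pixel-rate check referenced above, as a quick sketch; it's pure arithmetic and ignores per-frame costs that don't scale with resolution:

```python
# Pixels pushed per second for the scenarios mentioned above.
def pixel_rate(width, height, fps):
    return width * height * fps

p1080_144 = pixel_rate(1920, 1080, 144)   # ~299 Mpix/s
p1440_144 = pixel_rate(2560, 1440, 144)   # ~531 Mpix/s
p4k_60    = pixel_rate(3840, 2160, 60)    # ~498 Mpix/s

print(p1440_144 / p1080_144)   # ~1.78x the pixel rate of 1080p144
print(p1440_144 / p4k_60)      # ~1.07x -> slightly more than 4K60, as claimed
```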
IMO the PPI race isn't as beneficial as higher refresh rates. For TVs, manufacturers are playing this game of misinformation so consumers get the short end of the stick, but having a monitor running at 144Hz is a world of difference compared to 60Hz for me. You can tell just from the mouse cursor moving across the screen. As I age I realize every day that my eyes will never be as good as yesterday, and knowing that, I'd take a 27" 1440p @ 144Hz any day over a 28" 5K @ 60Hz.
Well, it all depends on viewing distance. I currently game on a 30" 2560x1600 Dell U3014; since it's larger I can sit further away and still have just as good an experience as a 24" or 27" that's closer. So you can't just say larger monitors mean you can't focus on it all, because you can, just at a further distance.
The power of the newest technology is and has always been an illusion, because the creation of games will always be an exercise in compromise. Even a game like WoW that isn't crippled by console considerations is created for the lowest common denominator of the PC hardware population. In other words: "if you buy it they will make it" vs. "if they make it I will upgrade." Beyond the unlimited reach of an open world's possible textures and vertex counts, some artists are of the opinion that more hardware power would result in a less aggressive graphics budget (when the time spent wrangling a synced, normal-mapped representation of a high-resolution sculpt, or tracking seam problems in lightmapped approximations of complex illumination with long bake times, can take longer than simply using that original complexity). The compromise can take more time than if we had hardware that could keep up with an artist's imagination. In which case I have to wonder about the imagination of the end user who really believes his hardware is the end of any graphics progress.
On desktop, all AMD needs to do is lower prices and perhaps release an OC'd 290X to match 980 performance. It will reduce their margins, but they won't be irrelevant in the market, like in CPUs vs Intel (where AMD's most powerful beasts barely touch Intel's low end, apart from some specific multi-threaded cases).
Why so simple? On desktop:
- Performance is still the #1 factor - if you offer more per $, you win
- Noise can be easily resolved via open air coolers
- Power consumption is not such a big deal
So ... if AMD card at a given price is as fast as Maxwell, then they are clearly worse choice. But if they are faster?
In mobile, however, they are screwed in a big way, unless they have something REAL good up their sleeve (looking at Tonga, I don't think they do; I'm convinced AMD intended to pull off another HD 5870 (i.e. be on the new process node first), but it apparently did not work out this time around).
The 290X already is effectively an overclocked 290 though. I'm not sure they'd be able to crank up power consumption reliably without running into heat dissipation or power draw limits.
Also, they'd have to invest in making a good reference cooler.
AMD will not beat the 980 (they could probably put up some fight, but nVidia could always defend it easily, so why do that - it would just dilute prices). What is more important for them is that, *on desktop*, AMD can still stay relevant in lower price buckets by offering more performance per $ (while relying on partners for custom open-air cooling and ignoring the power draw disadvantage).
You do realize what you said pretty much exactly mirrors what people said about AMD and CPUs a few years back? Just trying to offer value while your competitor is making more efficient chips is a dead end where you're soon so far behind in technology that it's not enough. Nobody wants a 220W CPU (FX-9370/9590) and if AMD needs to pull a 300+W GPU to compete with GTX 980 it'll be equally dead on arrival.
Not really. When Core 2 was released, pretty much AMD's entire lineup was made irrelevant (I still use my 7-year-old mid-range Core 2 Duo, and I know that AMD chips weren't even for consideration back then). Now AMD's fastest card is faster than nVidia's 2nd fastest offering. Look at the TR 2014 hardware survey, where 80% of clearly enthusiast buyers spend less than $400. Die sizes are similar. Both companies are fabless and thus have access to the same processes (unlike in the competition with Intel).
AMD of course HAS TO come up with something better than what they have now. And soon. My point was mainly that they should be able to survive this holiday season sort of okayish.
I expect that AMD is focusing their limited resources on a 20nm part, but it apparently did not work out as well as it did in the days of the HD 5000 and 7000 series. And Maxwell's improvements are greater than what is achievable just with a die shrink. So there's some hard work ahead for AMD. Given the necessary lead time for such products, I doubt the 300 series will be good enough (unless they were going nuts with efficiency after seeing the 680).
I have admired nVidia for a long time for always covering the weak spots in their products. It could be seen from the times when they went against 3dfx, through the FX 5000 series, and now of course they show how they learned from the 480 era.
BTW, before anyone asks: we're still working to get images and charts in. 4 days is very little time for a 20K word article. So please hold on for a bit.
As noted in the article, we had a problem with our 970 sample that was not able to be resolved in time for this article. Otherwise I would have very much liked to have a 970 in this review.
The boss quits and all you guys are running around the office with your shirts off, screaming at the top of your lungs? The review could have waited an hour or two so that it was done; now I'm not even going to finish reading it.
They've been doing this since forever. If you look at the comments from the R9 290X launch review, people were complaining about the same thing for example.
Including me. It was unacceptable click-baiting then and it still is. Interestingly enough, it's not a site-wide issue. Surface Pro 3 and Devil's Canyon both had long waits for ultimately excellent reviews. The iPhone 6 will no doubt be a very popular review, and yet Joshua or whoever didn't push it online at midnight. For whatever reason though, GPU reviews get this weird 'rush to publish, fill in content later' pattern.
@hpglow, in Ryan's defense, it was a short turnaround from the press briefing and this has happened in the past. Usually AT's articles focus heavily on the technical aspects also (which is greatly appreciated throughout the industry) and he also gets help from the rest of the staff to stitch the review together, so it is understandable that it is sometimes uploaded piecemeal.
I would rather have something that is eventually updated that stands the test of time, vs. something that is rushed out hastily.
You think that it would only take an hour or two to get a gpu somehow, run dozens of tests on it, put those tests into tables, put those tables onto pages, then write another few thousand words on those tests?
it's nice having one article with a full review, & it's nice to have early partial results... so in the future if publishing with missing content PLZ put in a big fat bold disclaimer: xyz content missing, update coming on 2.2.2222
@Ryan, thanks for the update, sorry I just scanned through and didn't see the subtext mentioning your issues with the 970. Looking forward to updated results once you get some good samples.
@nevertell, not sure if that comment was directed at me, but I never read through the entire article in the first sitting, especially in this case where I was actually in the market to buy one of these cards and might need to make a quick buying decision. I generally look at results and jump around a bit before going back to read the entire article, and I did not see any subtext on why the 970 wasn't included on this page about "Launching Today":
I expected to see something about why the 970 wasn't launching today (staggered launch, didn't get a review sample, etc.) but did not see anything, so I asked because I saw Ryan was attending to the comments here and might give a quick response.
Amazing card(s) Nvidia brought to market! I've already seen a couple of reviews showing this monster overclocking over 1450+. Just think about when Nvidia drops a big-die version........ :)
AMD is by no means out of it. They're still very competitive in terms of performance; however, they're far behind in terms of efficiency, which means to compete with the 980 they'll likely have to launch a far higher TDP card that requires more exotic cooling and will almost certainly be more expensive to manufacture. Even when you take the 285 into consideration, which offers 280-level performance at greatly reduced TDP, it's still at a higher TDP than the 980, which now outperforms the 290X by ~15%. And this isn't even taking noise, build quality, or features into consideration... Not a good position for AMD; in fact it's somewhat reminiscent of their processors (minus the competitive performance part).
"Just think about when Nvidia drops a big die version........ :)" Fortunately for AMD that's just not going to happen on 28nm, otherwise I might be inclined to agree with you. They still have a very real competitive chance with their upcoming cards.
Oh god, really? The 285 has greatly reduced TDP? Um, the 280 had a 200-watt TDP and the 285 is 190; that's 10 watts less, which I wouldn't call greatly reduced. Before you say the 280 had a 250-watt TDP, no, that is the 280X.
Plenty of sites I know of say it's 200, so if there is that much misinfo then AMD is likely at fault for that one. I'm seeing a lot of reviews put the real-world power usage difference at around 20 watts.
True, but the 285 didn't live up to the 180 watt claim. Later in the article they showed it saving only 13 watts under load when compared to the 280. So more like 237 watts?
On a per-watt scale, AMD's GPUs are now as inefficient as their CPUs when compared to the competition. It's good they got those console contracts, because they probably won't be getting the next round if this keeps up.
Absolutely amazing: Maxwell is twice as efficient per watt as GCN 1.2.
That seems to depend on the design reviewed. THG tested a similarly clocked card by a different manufacturer and there was a much larger gap between the 280 and 285 in terms of power consumption.
With that being said, the 980 and 970 are both extremely fast and power efficient. Especially the 970 - if it really hits the market at around that pricing, wow! Incredible value.
Strange that the 980 throttles so much at stock settings even outside of FurMark; the first thing I'd do is go into the settings and fiddle a bit until it boosts consistently. But given its performance, it's not really a problem, and it can be remedied. Still, something to keep in mind, especially when overclocking. I wonder how the 980 would have done with the beefier cooler from its higher-TDP predecessors and some mild overvolting?
If you look at the gaming benchmarks, the GPU is hitting 80C. Nvidia's design does not allow the GPU to exceed 80C, so it has to lower frequencies to stay at 80C. This is the consequence of using the Titan blower cooler but removing the vapor chamber, lowering its cooling capability. That's why I don't get why all these people are rushing to buy the reference design GTX 980s, which are all sold out. They are throttling by hundreds of MHz because the Titan blower cooler without a vapor chamber sucks.

Custom cooling options are going to let the GTX 980 reliably hit 1300-1400 MHz, some probably even 1500 MHz, under full load and still stay under the 80C limit. Keep an eye out for MSI's Twin Frozr V design. It's going to have a beefy radiator with 2x 100mm fans in an open-air design, allowing WAY more cooling potential than the reference design. The Twin Frozr V should allow the card to OC and actually keep those OC frequencies under heavy load, unlike the reference card, which can't even keep up with its stock settings under intense gaming. We should see a pretty big performance jump going to custom coolers, and the reference performance is already staggering.
Reviewers and "tech enthusiasts" alike jumped all over AMD when they didn't adequately cool their 290 cards. So while I don't disagree with what you're saying, I am just surprised that they would let it ship with such heavy throttling on ordinary games. Especially given that in this case it isn't because Nvidia shipped with a cooler that isn't sufficient - rather it's because by default the fan is running too slowly. Even without the vapor chamber, I bet it would be fine if they just turned up the fan just a hair. Not enough to make it loud, but enough to bring it in line with some of the other high-end cards here (under a load).
Anyway I suspect the vapor chamber will return in a higher-end "980 Ti" type configuration. In the meantime, yeah I'd keep an eye out for high-end aftermarket designs with a more aggressive power delivery system and wicked cooling. There's no doubt these chips have serious potential! I'd bet an aggressive 970 could hit the market for under $400 with 980-like performance and a factory warranty. :D
I'd say "poor AMD" but this kind of leapfrogging is nothing new. Even if AMD can't come out with something really impressive in the next several months, they can always remain competitive by dropping prices. My GPU is idle outside of gaming so the actual difference in power consumption in terms of dollars is tiny. Now, for number-crunching rigs that run their GPUs 24/7... that's a different story altogether. But then again, AMD's professional cards have good DP numbers so it's kind of a wash.
I'm very disappointed they got rid of the vapor chamber. I'm not a fan of the 3rd-party coolers as they exhaust the air into the case (a big deal for small form factor PCs). I prefer the blower cooler even though they are noisier, so the loss of the vapor chamber is a big deal.
LOL, people screaming at the 285. It actually consumes less power than the 980 and 970, not more. Nvidia greatly understated the TDP of the 980 and 970, to put it lightly. Both cards consume more power than the 250W TDP 7970 GHz, yet they're somehow rated at 165W and 145W. How laughable! http://i.imgur.com/nfueVP7.png
Misleading. If a card pumps out more frames (which the 980 most certainly does), it's going to drive up requirements for every other part of the system, AND it's going to obviously draw its maximum possible power. If you were to lock the framerate to a fixed value that all GPUs could reach the power savings would be more evident.
Also, TDP is the heat generation, as has been said earlier here, which is correlated but not equal to power draw. Heat is waste energy, so the less heat you put out the more energy you actually use to work. All this means is that (surprise surprise) the Maxwell 2 cards are a lot more efficient than AMD's GCN.
"TDP is the heat generation, as has been said earlier here, which is correlated but not equal to power draw." The GPU is a system which consumes energy. Since the GPU does not use that energy to create mass (materialization) or chemical bonds (battery), where the energy goes is easily observed from the outside. 1) waste heat 2) moving air mass through the heatsink (fan) 3) signalling over connects (PCIe and monitor cable) 4) EM waves 5) degradation/burning out of card's components (GPU silicon damage, fan bearing wear etc.) And that's it. The 1) is very dominant compared to the rest. There's no "hidden" work being done by the card. It would be against the law of conservation of energy (which is still valid, as far as I know).
That's a misunderstanding of what TDP has to do with desktop cards. Now for mobile stuff, that's great. But the bottleneck for "Maxwell 2" isn't TDP, it's clockspeeds. Meaning the efficiency argument is useless if the end user doesn't care.
Now, for certain fields the end user cares very much. Miners have apparently all moved onto ASIC stuff, but for other compute workloads any end user is going to choose NVIDIA currently, just to save on their electricity bill. For the consumer end user, TDP doesn't matter nearly as much unless you're really "Green" conscious or something. In that case AMD's 1 year old 290x competes on price for performance, and whatever AMD's update is it will do better.
It's hardly a death knell of AMD, not the best thing considering they were just outclassed for corporate type compute work. But for your typical consumer end user they aren't going to see any difference unless they're a fanboy one way or another, and why bother going after a strongly biased market like that?
While it's a fair argument that unless you're environmentally inclined the energy savings from lower TDP don't matter, I'd say a lot more people do care about reduced noise and heat. People generally might not care about saving $30 a year on their electricity bill, but why would you choose a hotter noisier component when there's no price or performance benefit to that choice.
AMD GPUs now mirror the CPU situation where you can get close to performance parity if you're willing to accept a fairly large (~100W) power increase. Without heavy price incentives it's hard to convince the consumer to tolerate what is jokingly termed the "space heater" or "wind turbine" inconvenience that the AMD product presents.
Actually, the GPUs from AMD do not mirror the CPU situation at all. AMD's FX 9xxx, with its huge TDP and all, gets thoroughly outperformed by even the i7-4790K in almost everything, and the 8-core i7-5960X obliterates it in everything; the performance of its CPUs is NOT close to Intel's even with 100 extra watts. At least with the GPUs the performance is close to Nvidia's, even if the power usage is not.
TL;DR: AMD's GPU situation does not mirror its CPU situation. The CPU situation is far worse.
I as a consumer greatly care about efficiency, TDP, heat, and noise, not just performance. I do not like hearing my PC. I switched to all Noctua fans, all-SSD storage, and a platinum-rated PSU that only turns on its fan over a 500-watt load. The only noise coming from my PC is basically my Radeon 5870. So the fact that this GPU is super quiet means that no matter what AMD does performance-wise, if it can't keep up noise-wise they lose a sale with me, and I'm sure many others.
And I'm not a fanboy of either company; I chose the 5870 over the GTX 480 when Nvidia botched that card and made it a loud, hot behemoth. And I'll just as quickly ditch AMD for Nvidia for the same reason.
The situation we have here now is very reminiscent of AMD's CPU position shortly after Core 2 hit the market. Nvidia now has a product with better performance, better efficiency, better (still) drivers & features, and similar pricing and that puts AMD in a bad way. The 3xx series had better seriously wow, or AMD's GPU division is quickly going to see the same market erosion that happened after Core 2/iCore. Personally, i think this is a knockout blow. - soon to be former 7970 owner
Maxwell is truly amazing stuff. Great advances from Nvidia in virtually every aspect.
Not super thrilled about the 980 price at $550; the 970 price however is amazing at $329. I was going to go with the 980, but 2x 970s seem more appealing. The 970 is 13/16 SMXes but it retains the full 4GB, the full 256-bit bus, and the full 64 ROPs. Hopefully there are a lot of 970s on the full 980 PCBs.
Jensen just confirmed the prices on the Live Stream.
They have an efficiency advantage because they use the best 28nm process, called HPM. They use the high performance mobile process; of course they are very efficient.
From TSMCs website ( http://www.tsmc.com/english/dedicatedFoundry/techn... )"The 28nm High Performance Mobile Computing (HPM) provides high performance for mobile applications to address the need for applications requiring high speed. Such technology can provide the highest speed among 28nm technologies. With such higher performance coverage, 28HPM is ideal for many applications from networking, and high-end smartphone/ mobile consumer products."
It looks like the HPM process was designed for chips that would dissipate much less than the 150-200W this one does. I seriously doubt someone would use HPM for such high-power chips. Also, somebody had the voltages for the GM204 chip, and the idle voltage was closer to the 0.85V of HP than the 0.9V of HPM.
There's no indication they are using 28nm HPM; even the first Maxwell part (GM107) used 28nm HP and alluded to this amazing power/perf ratio we see today with GM204.
It is obvious Nvidia's convergence of mobile *design* (not process) fundamentals helped them as we saw with Kepler, and this will only be further beneficial with their mobile Maxwell designs.
Yep the 970 is amazing price:perf, Newegg has them at the $330 price up to $350 for some custom/OC versions.
I did end up going with a single 980 though. The difference in build quality is just too much, and SLI with my new G-Sync monitor (Swift) has had issues on my current 670 SLI build. The scaling with SLI is also not exceptional with these Maxwell cards (~60%), so the improvement from 2x 970 is actually not that much over a single highly overclocked 980.
Still amazing job by Nvidia, the 980 would have been a grand slam at $500 but it is still an Earl Weaver 3-run blast at $550.
I'm going to wait for the custom GTX 980s. It was already throttling from hitting the 80C limit in most games. The blower design wouldn't have throttled if they had left the vapor chamber in, but they didn't. My case has plenty of airflow, so I don't require a blower design. MSI's Twin Frozr V open-air design will cool the GPU much better and stop it from throttling during gaming. People rushing to buy the reference design are missing out on hundreds of MHz due to thermal throttling.
Yep the open-faced custom coolers are definitely better at OC'ing, especially in single-GPU configs, but the problems I have with them are:
1) They tend to have cheaper build quality than the reference, especially compared to the NVTTM cooler, which is just classy stuff. The custom coolers replace this with lots and lots of plastic, visible heatpipes, and cheap-looking fans. If I wanted an Arctic Accelero on my GPUs I would just buy one.
2) They usually take longer to come to market. Frequently +3-6 weeks lead time. I know it's not a super long time in the grand scheme of things, but I'd rather upgrade sooner.
3) The blowers tend to do better in SLI over longer periods of time, and also don't impact your CPU temps/OC as much. I have a ton of airflow too (HAF-X) but I still prefer most of the heat being expelled from the start, and not through my H100i rad.
4) Frankly I'm not too worried about squeezing the last 100-150MHz out of these chips. There was a time I might have been, but I tend to stick to a safe OC about 100-150MHz below what most people are getting and then call it a day without having to do a dozen 3DMark loops to verify stability.
Did you see the benchmarks? Some games were running in the 900s, some in the 1000s, some in the 1100s, stuck at these frequencies because the card was riding the 80C limit. As the review mentioned, these aren't the same Titan coolers, as they removed the vapor chamber and replaced it with regular heatpipes. Getting a custom-cooled card isn't about squeezing the last 100-150 MHz from an OC; it's about squeezing an extra 400-600 MHz from an OC, as many reviewers have gotten the GTX 980 to OC to 1500 MHz. We are talking a massive performance increase from getting the proper cooling, bigger than even the R9 290X going from reference to custom, and that was pretty big itself.
Even to get the card to reliably run at stock settings during intense gaming you need a custom-cooled card. The reference-cooled card can't even reliably hold its stock clock under intense gaming because the blower cooler without a vapor chamber sucks.
No, you can adjust the Nvidia fan and GPU temp settings to get sustained Boosts. There is a trade-off in terms of fan noise and/or operating temps, but it is easy to get close to the results of the custom coolers at the expense of fan noise. I personally set my fan curve differently because I think Nvidia's 80C target temp profile is a little bit too passive in how quickly it ramps up fanspeeds. I don't expect to have any problems at all maintaining rated Boost speed, and if I want to overclock, I fully understand the sacrifice will be more fan noise over the custom coolers, but the rest of the negatives regarding custom coolers makes the reference cooler more appealing to me.
The GTX 980 page on NVIDIA website seems to indicate HDMI 1.4 as it says 3840*2160 at 30 Hz over HDMI (it is mentioned as a foot note). Are you sure about it being HDMI 2.0 ?
Hi Ryan, great review! Will there be the usual HTPC perspective? For example, did they fix the 23.976 refresh rate as Haswell does? I think it's important to know how these work as HTPC cards. Regards
I think the definition of HTPC is beginning to change though, and while these may not yet fit into traditional HTPC (Brix and NUC seem to be filling this niche more), they are definitely right in the SteamBox/BattleBox category.
Honestly, SteamBox was the first thing that came to mind when I saw that 165W TDP on the GTX 980, we will be seeing a lot of GM204 variants in the upcoming years in SFF, LAN, SteamBox and gaming laptop form factors that is for sure.
Any info on what HDMI 2.0 level and HDCP version this nVidia GPU has? It can only be one of these two: HDMI 2.0 level B with HDCP 2.2, or HDMI 2.0 level A with HDCP 2.0.
If it's the former, gaming performance takes a hit on 4K displays that use HDMI 2.0. If it's the latter, this card is DOA when 4K Blu-ray videos come out.
Ideally, we consumers want a full HDMI 2.0 spec transceiver with 20+Gbps bandwidth AND HDCP 2.2. But these are not yet available and that is why we see these stop-gap measures that manufacturers are calling HDMI 2.0 A and B. This should *hopefully* be resolved by next year.
In the meantime, I'd also like to know what exactly Nvidia is offering in the 970 and 980 cards. Without HDCP 2.2 the compatibility of these cards with future TV and AV gear is questionable.
I would NEVER buy a PSU that isn't at least a somewhat halfway decent 80+ certified unit. Especially if you're going to then combine it with expensive components. At this point anything that falls under the category of "Cheap non 80+" is as likely to explode and catch fire as it is to run with complete safety. :P
"GM204 ends up weighing in at 5.2 billion transistors, with a die size of 398mm2. This compares to 3.54B transistors and a die size of 294mm2 for GK104, and 7.1B transistors for 551mm2 for GK110."
Does this mean there will be a 10 billion transistor GM210?
I think you are right. Unless there is another huge delay, big Maxwell will not see the light of day until 20nm. At 28nm the die size would be astronomical, and that's too expensive.
Why is it you guys are still using reference 290/290X cards for testing? Are they even being sold anymore?
Oh, and I am having trouble believing nVidia's claimed power consumption. It's certainly lower than a 780 Ti or 290X, but not by the 90W that is claimed according to the graphs.
I looked at a few review sites, and it's at least 90 watts; on some sites it's a lot more, but the one that is closest is still about 90. AMD has been known to understate the true draw of their cards and even their CPUs. It tends to be higher than they say.
Barely beating AMD's ancient R9 290X...doesn't look good for Nvidia's new generation considering AMD's new line is due soon at the high end. Yet they barely need it as their old line is close to competitive already!
14nm is the first process node since they started building IC's that costs more than the prior generation. It would appear that the days of continually dropping prices are gone if this remains true. People really fail to realize how significant this is and it's likely to have profound impacts on the industry for the foreseeable future. It's quite possible that process advances will slow down dramatically as a result of the higher costs because the producers are no longer guaranteed a lower priced product for the same input. Continually improving products at lower prices may be a thing of the past in the CPU space.
The universe is oddly self-correcting when it comes to those types of things. The hiccup in the process node seemingly comes at just the right time to kick designers' asses into letting go of that mad dash for transistors and focusing on intelligent and efficient design instead.
Vinny that depends. If you are gaming at 1080 res there is no point. At 1440 it would depend on the game but for the most part the 770 is still fine. I don't think you should waste your money but to each his own.
Well, that settles it then. I'll take two 970s for SLI (Guru3D has benches for it). Now the question remains what the vendors will actually sell them for and what the various versions will go for (factory overclocked, etc.). On top of the question of how hot an item these cards will be. I remember trying to get a 680 for nearly two weeks from the Egg, just hitting page refresh randomly and hoping to catch a moment they were back in stock.
Seeing as this is only a GM204 part, it's probably safe to say there will be a GM210 part in the higher wattage ranges. Probably some damn fool $1000 Titan card...
So basically the 980 is my 680 SLI setup, but on a single card, plus some new features. And it's quieter and uses less power... And then you can SLI it later. Awesome!!
Yup! Just like my original single 680 was equal to two 570s. This wait was worth skipping the 7-series over. I have a feeling there are going to be a whole lot of 670s and 680s flooding the used market starting Friday.
I see the 980 is rated at 165W while the 680 is rated at 195W, but the test results show that the total system consumption using the 980 is 8W higher than the one using the 680. I understand there may be some variation in CPU power because the 980 pushes more fps and therefore there is more work for the CPU. Is it possible that the 680 is less efficient in using its resources and therefore not able to hit max TDP?
On the Power, Temp, and Noise page, the chart labeled "GeForce GTX 980 Average Clockspeeds" shows a label reading "Battlefield 3" instead of "Battlefield 4". =)
What revision Display Ports are those, 1.2 or 1.3? I see that the 1.3s had a launch article on AT earlier this week, any chance they made it into Maxwell along with HDMI 2?
The R290X was branded "Loud" in Anandtech's review (comment subsequently removed) and while token gestures of approval were made in that article, Mr Smith did seem to go out of his way to under-emphasise the strong points of that card. Now we have the "Mighty Maxwell," review, and unsurprisingly Mr Smith is waxing lyrical about the virtues of his favourite company's flagship GPU. Any reader who has read Anandtech's reviews of the R290X and GTX980 must surely recognise the blatant bias in Nvidia's favour. Having been an avid reader of Anandtech's articles for many years, one hopes the recent departure of it's founding father will not precipitate an all time low in the quality of Anandtech's GPU coverage. I encourage Mr Smith to temper his bias in forthcoming Gpu reviews, if only to prevent an exodus of Anandtech's faithful to sites that are capable of providing a balanced perspective on the ongoing battle between Nvidia and AMD GPUs.
That's because everything AMD touches is a joke. ATI was at the top of their game, far ahead of NVidia on price:performance; then AMD bought them. Look where ATI's been since. GCN is a crappy architecture. The Netburst of GPUs. Ridiculously high power consumption. The only corner market it has is FP64. If you actually game, GCN is a dud. The only reason Sony/Microsoft went with AMD GPUs was because it's the best (only) integrated CPU/GPU solution (something NVidia has no IP for), other than Intel, which is way too expensive for consoles and still not as good.
Until Maxwell, AMD's GPUs were handily outperforming nvidia's when it came to price/performance. GCN isn't a bad architecture; it's just outdated compared to nvidia's brand new one. I'm sure AMD has architecture improvements coming down the road too. This happens all the time with nvidia and AMD, where one of them leapfrogs the other in architecture; the cycle will continue. Right now nvidia is ahead with Maxwell, but acting like AMD is doomed or saying that their architecture is the 'Netburst of GPUs' is silly.
AMD is going to have to perform a miracle to get their performance and energy use close to nvidia's. AMD may keep up in performance, but they are going to do that with a 300+ watt card requiring dual 8-pin connectors plus liquid cooling or a triple-slot design, because it's going to hog power to keep up with and surpass nvidia. Nvidia's solution is much more elegant, and there is little hope for AMD to match that.
This isn't even counting GM210. When big Maxwell drops at 20nm, I just don't see how AMD will be able to match it. Like I said, it will take an engineering miracle from AMD.
There's a difference between bias and impression. At what point does what you call "tempering bias" become a source of bias in and of itself, because one cannot give one's impression? For instance, the removal of branding the R290X reference card "loud" (which it is). I encourage Mr. Smith neither to "temper his bias" nor to get involved in the "ongoing battle between Nvidia and AMD GPUs", and instead to report the facts and relay his honest impression of these facts. If he does this well, I think the majority of readers of this site will support him for attempting to give a real representation of the state of the products available, which is what they are presumably looking for.
Looks like these GTX 970 and 980 cards are shit when it comes to compute, especially double precision floating point operations. I don't game, so I don't care about FPS. I do more productive things with video cards though.
Then maybe you should buy a workstation card and not a gaming card, or at the very least a Titan if you can't afford the super outrageously priced Quadros and Teslas.
Then you simply aren't the target market for these types of cards. Like Lastop311 said...go bend over for a professional level workstation GPU that most people here care nothing about. :-/
I have to admit I am greatly impressed. So impressed I don't know if I will wait for big Maxwell. For a 2560x1600 gamer, a single GTX 980 runs everything incredibly well, all while being quiet. Custom coolers are going to boost the performance even more, considering it was throttling back after hitting the 80C limit. MSI's Twin Frozr V cooling should really unleash its full performance. I almost pulled the trigger on a GTX 780 Ti too. Now instead of spending 700 I spend 550 and get something better in every aspect.
PS: My current PC is a Gulftown i7-980X + Radeon 5870, and the GPU is forcing me to either run at a non-native resolution or turn down the graphics level of some games, as I didn't have a 2560x1600 monitor when I first got the PC. The 5870 was fine for 1080, not so for 1600. The GTX 980 will be a huuuuuuuuge performance boost. The CPU is still really fast, OC'd to 4.2GHz with 6 cores/12 threads. I thought about going for an X99 system, and was going to until I saw that lovely 55" LG OLED that can be found for 3000. So I'll just wait for Skylake-E before I upgrade the CPU side.
Useless stuff. Today's cards are still kinda underpowered for 4K, while too overpowered for 1080p. You don't need anything more than a GTX 770 / R9 280X to play anything maxed out at 1080p.
You could try 1440p or 1600p instead of 2160p, that's the sweet spot for these cards probably. A nice increase in pixels from 1080p, looks really nice.
The performance this card provides at 1440p is impressive, and this would be the card I would buy if I were on a 27" or 30" monitor. As for 4K, these cards are still too slow unless you Crossfire or SLI.
Also impressed with the compute performance. I will still hold on to my 7970 GHz, which is still faster in Sony Vegas Pro, but I like the improvement on the NV side in this area.
I'm still playing at 1200p, so there hasn't been any need to upgrade my GPU. I'm hoping that by the time there is, these node issues will be resolved so I can get my 60-75% boost in GPU performance before I upgrade.
Haha, if you think the 970 is useless, when it beats out the Radeons whilst being cheaper (an instant change in the market), you are insane.
If you think people care about 3840×2160, they don't. If you do, buy a Radeon 295X2 and be done with it. It can do it, but it's ropey at times, no doubt.
You're completely forgetting about the people that prefer frame rates over 60. There's plenty of room for improvement at 1080p to fully support the 120/144Hz market. 1080p over 120Hz is more important to me than 4K.
GPUs have been supplanted by FPGAs and ASICs for both Bitcoin (SHA-256) and Litecoin (scrypt). At this point I'm not convinced cryptocoin processing speeds are going to be relevant.
Maybe not relevant for professional bitcoin miners as such, but a good benchmark of cryptography / big integer number crunching capabilities where a fair amount of effort has gone into optimizing the software for the hardware.
I'm interested in 4K hardware, currently using 1600P Dell with a 780Ti OC and want to move up to 2160P. I'm hoping this 980 is fully compliant with HDMI 2.0!
It sounds silly, but the pictures of the restaurant in the Voxel discussion look exactly like Poncho Villa's in Redondo Beach, Cali. That place was my favorite Mexican restaurant when I lived in Cali, and I always enjoyed Sunday Brunch there.
I'm sorry, but I couldn't care less about power efficiency on an enthusiast GPU. The 780 Ti was a 250W card and that is a great card because it performs well. It delivers results.
I have a desktop computer, a full ATX tower, not a laptop. PSUs are cheap enough that it's not even a question of that.
So please, stuff the power requirements of this GTX 980. The fact is, if it sucked 250W and was more powerful, then it would have been a better card.
Exactly. Not only that, the "real" money is in getting the cards in OEM systems which sell hundreds of thousands of units. And those are very power and cooling specific.
For desktop cards, power consumption is meaningless to the 99%. Price/performance is much more important. If Card A uses 50W more under full load than Card B, but performs around the same and is £50 cheaper to buy, then at an energy cost of 15p per kWh it would take 6666 hours of running to get your £50 back. Add to this that if Card A puts more heat into the room, in the winter months your heating system will use less energy, meaning it takes even longer to get your cash back....
TL;DR: wattage is only important in laptops and tablets and things that need batteries to run.
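A quick check of that break-even arithmetic (figures taken from the comment above; the 4 hours/day divisor is an added assumption):

```python
# Break-even time for the £50 example above.
price_delta_gbp = 50.0
extra_power_kw  = 0.05    # Card A draws 50 W more under full load
tariff          = 0.15    # £/kWh

hours_to_break_even = price_delta_gbp / (extra_power_kw * tariff)
print(hours_to_break_even)               # ~6667 hours of full load
print(hours_to_break_even / (4 * 365))   # ~4.6 years at an assumed 4 hours of gaming per day
```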
At least in this case it appears the power efficiency allows for a decent overclock. So you can get more performance and heat up your room at the same time.
Of course I'm sure they're leaving some performance on the table for a refresh next year. Pascal is still a long way off, so they have to extend Maxwell's lifespan. Same deal as with Fermi and Kepler.
When I built my current mATX box, one criterion was that it be silent, or nearly so, while still being a full-power rig (OC'd i7 & 670), and the limitation really is GPU draw. Thankfully NV's power draw had dropped enough by the 6xx series that I was able to use a fanless PSU and get my machine dead silent. I am glad I don't need a tower box that sounds like a jet anymore :)
I would love to see them offer a high TDP, better cooled, option though for the uber users who won't care about costs, heat, sound and are just looking for the max performance to drive those 4k/surround setups.
I agree that power consumption in itself isn't so important to most consumer desktop users, as long as they don't require extra purchases to accommodate the cards. But since power consumption and noise seem to be directly related for GPUs, power efficiency is actually an important consideration for a fair number of consumer desktop users.
Yeah, but they're still limited by the 250W spec. So the only way to give us more and more powerful GPUs while staying within 250W is to increase efficiency.
Apparently, if I want to run anything under the sun at 1080p cranked to full at 60fps, I will need to get one GTX 980 and a suitable system to run with it, and forget mid-range priced cards.
That should put a huge hole in my wallet.
Oh yes, the others can run stuff at 1080p, but you have to keep tweaking drivers, turning AA on, turning AA off, what a chore. And the age-old joke: yes, it RUNS Crysis, at the resolution I'd like.
Didn't the card, by any chance, actually benefit from being fabricated at 28nm, by spreading its heat over a larger area? If the whole thing, hypothetically, just shrank to 14nm, wouldn't all that 165W of power be dissipated over a smaller area (1/4 the area?), and wouldn't this thing hit the throttle and stay there?
Or by being made smaller, would it actually dissipate even less heat and still get faster?
I think that it depends on the process. If Dennard scaling were to be in effect, then it should dissipate proportionally less heat. But to my understanding, Dennard scaling has broken down somewhat in recent years, and so I think heat density could be a concern. However, I don't know if it would be accurate to say that the chip benefited from the 28nm process, since I think it was originally designed with the 20nm process in mind, and the problem with putting the chip on that process had to do with the cost and yields. So, presumably, the heat dissipation issues were already worked out for that process..?
The die size doesn't really matter for heat dissipation when the external heat sink is the same size; the thermal resistance from die to heat sink would be similar.
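To illustrate that point, here is a minimal lumped-resistance sketch with made-up numbers (the resistances below are purely illustrative assumptions, not measured GPU data). The shared heatsink-to-air term dominates, so shrinking the die changes the junction temperature far less than the "1/4 the area" intuition suggests:

```python
# Minimal lumped thermal sketch (illustrative, assumed resistances -- not real GPU data).
# Junction temp = ambient + power * (die-to-heatsink resistance + heatsink-to-air resistance).
def junction_temp(power_w, r_die_to_sink, r_sink_to_air, t_ambient=25.0):
    return t_ambient + power_w * (r_die_to_sink + r_sink_to_air)

power = 165.0          # W, the card's rated TDP
r_sink_to_air = 0.25   # K/W, same cooler either way (assumed)

# Hypothetically shrinking the die raises the die-to-sink (spreading) resistance somewhat...
big_die   = junction_temp(power, r_die_to_sink=0.05, r_sink_to_air=r_sink_to_air)
small_die = junction_temp(power, r_die_to_sink=0.10, r_sink_to_air=r_sink_to_air)

print(f"Large die:  ~{big_die:.1f} C")    # dominated by the shared heatsink term
print(f"Small die:  ~{small_die:.1f} C")  # hotter, but nowhere near 4x hotter
```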
I would love to see these built on Intel's 14nm process or even the 22nm. I think both Nvidia and AMD aren't comfortable letting Intel look at their technology, despite NDAs and firewalls that would be a part of any such agreement.
Well, if one goes by Jen-Hsun Huang's (Nvidia's CEO) comments from a year or two ago, Nvidia would have liked Intel to manufacture their SoCs for them, but it seems Intel was unwilling. I don't see why Intel would be willing to manufacture SoCs but not GPUs, given that at that time Nvidia must already have planned to put its desktop GPU technology into its SoCs, unless the one-year delay between the parts makes a difference.
I'm truly impressed with this new line of GPUs. To achieve this leap in efficiency using the same transistor feature size is a great incremental achievement. Bravo TSMC & Nvidia. I'm comfortable thinking we will soon get this amazing 980-level performance in gaming laptops once the technology scales to the 10nm process. Keep up the great work.
http://blogs.nvidia.com/blog/2014/09/19/maxwell-an... Did I miss it in the article or did you guys just purposely forget to mention NV claims it does DX12 too? see their own blog. Microsoft's DX12 demo runs on ...MAXWELL. Did I just miss the DX12 talk in the article? Every other review I've read mentions this (techpowerup, tomshardware, hardocp etc etc). Must be that AMD Center still having it's effect on your articles ;)
They were running a converted elemental demo (converted to dx12) and Fable Legends from MS. Yet curiously missing info from this site's review. No surprise I guess with only an AMD portal still :(
From the link above: "Part of McMullen’s presentation was the announcement of a broadly accessible early access program for developers wishing to target DX12. Microsoft will supply the developer with DX12, UE4-DX12 and the source for Epic’s Elemental demo ported to run on the DX12-based engine. In his talk, McMullen demonstrated Maxwell running Elemental at speed and flawlessly. As a development platform for this effort, NVIDIA’s GeForce GPUs and Maxwell in particular is a natural vehicle for DX12 development."
So maxwell is a dev platform for dx12, but you guys leave that little detail out so newbs will think it doesn't do it? Major discussion of dx11 stuff missing before, now up to 11.3 but no "oh and it runs all of dx12 btw".
One more comment on the 980: if it's a reference launch, how come other sites already have OC versions (e.g., tomshardware has a Windforce OC 980, though stupidly, as usual, they downclocked it and the two OC/superclocked 970s they had to reference clocks... ROFL, like you'd buy an OC card and downclock it)? It seems to be an OC launch all around. Newegg even has them in stock (check the EVGA OC version): http://www.newegg.com/Product/Product.aspx?Item=N8... And with a $10 rebate so only $559, plus a $5 gift card. "This model is factory overclocked to 1241 MHz Base Clock/1342 MHz Boost Clock (1126 MHz/1216 MHz for reference design)"
Who would buy reference for a $10 difference? In fact the reference cards are $569 at Newegg, so you save money buying the faster card... LOL.
TheJian, Wow, Did you read the article? Did you read the conclusion? AT says the 980 is "remarkable" , "well engineered", "impeccable design" and has "no competition" They covered almost all of Nvidia marketing talking points and you're going to accuse them of a conspiracy? Are you fking retarded??
Rather than just talking about the 750 Ti, it would be nice to also include it in the comparisons, to put into clearer perspective what it means to go from Maxwell 1 to Maxwell 2 in terms of performance, power consumption, noise and, while we're at it, performance per watt and performance per dollar.
Also, where are the benchmarks for the GTX 970? I realize that card is in a different ballpark, but its rather reasonable power draw might actually make the GTX 970 a viable candidate for an HTPC build. Is it also possible to run it with just one additional 6-pin connector (since, as you mentioned, this would be within spec without any overclocking), or does it absolutely need two of them?
As was noted in the review at least twice, they were having issues with the 970 and thus it won't be tested in full until next week (along with the 980 in SLI).
Wow! This makes me upgrade from a GTX660Ti - not because of gaming (my card is fast enough for my needs) but because of the power efficiency gains for GP-GPU (running GPU-Grid under BOINC). Thank you nVidia for this marvelous chip and fair prices!
I still can't understand AMD's 'uber' option. It shouldn't be part of the test, because it's just an 'OC' button, nothing else. It should just be the R9 290X at stock, not AnandTech's 'AMD Center' uber way.
And I can't shake the strong feeling that AnandTech is drifting AMD's way, because they have their own 'AMD Center' section. It means people can't read their reviews of the NVIDIA vs. Radeon race without wondering whether AnandTech leans to the Radeon side one way or another, and it seems pretty clear that it does.
By the way, I hope AnandTech makes it clear that AMD's R9 200 series is really the competition for NVIDIA's 900 series, because everyone knows AMD skipped the 8000 series and put the R9 200 series up against NVIDIA's 700 series when it should have been called the 8000 series. So now the GPU generations on both sides are even.
Meaning that the next AMD R9 300 series, or whatever is coming, will battle NVIDIA's NEXT generation of GPUs, not the 900 series.
Ryan, Is it possible to see the average clock speeds in different tests after increasing the power and temperature limit in afterburner?
And also once the review units for non-reference cards come in it would be very nice to see what the average clock speeds for different cards with and without increased power limit would be. That would be a great comparison for people deciding which card to buy.
Exceptional by NVIDIA; it's always good to see a more powerful yet more frugal card especially at the top end.
AMD's power consumption could be tackled - at least partly - by some re-engineering. Do they need a super-wide memory bus when NVIDIA are getting by with half the width and moderately faster RAM? Tonga has lossless delta colour compression which largely negates the need for a wide bus, although they did shoot themselves in the foot by not clocking the memory a little higher to anticipate situations where this may not help the 285 overcome the 280.
Perhaps AMD could divert some of their scant resources towards shoring up their D3D performance to calm down some of the criticism because it does seem like they're leaving performance on the table and perhaps making Mantle look better than it might be as a result.
What might be interesting is doing a comparison of video cards for a specific framerate target to (ideally, perhaps it wouldn't actually work like this?) standardize the CPU usage and thus CPU power usage across greatly differing cards. And then measure the power consumed by each card. In this way, couldn't you get a better example of
Whoops, hit tab twice and it somehow posted my comment. Continued:
couldn't you get a better example of the power efficiency for a particular card, and then meaningful comparisons between different cards? I see lots of people mentioning how the 980 seems to be drawing far more watts than its rated TDP (and I'd really like someone credible to come in and state how heat dissipated and energy consumed are related. I swear they're the exact same number, since any energy consumed by the transistors is ultimately released as heat, but many people disagree here in the comments and I'd like a final say). Nvidia can slap whatever TDP they want on it and justify it with some marketing mumbo jumbo. Intel uses its SDPs; Nvidia using a 165 watt TDP seems highly suspect. And please, please use a non-reference 290X in your reviews, at least for comparison's sake. Hasn't it been shown that cooling that isn't garbage, running the GPU closer to the high 60s/low 70s, can lower power consumption (due to leakage?) by something on the order of 20+ watts on the 290X? Yes, there's justification in using reference products, but let's face it, the only people who buy reference 290s/290Xs are either launch buyers or people who don't know better (there's the blower argument but really, better case exhaust fans and non-reference cooling destroy that argument).
So basically I want to see real, meaningful comparisons of efficiency for different cards at some specific framerate target, to standardize CPU usage. Perhaps even monitor CPU usage over the course of the test and report average, minimum and peak usage? Even using monitoring software to measure CPU power consumption in watts (I'm fairly sure there are reasonably accurate ways of doing this already; CoreTemp reports it, probably just voltage*amperage, but correct me if I'm wrong) and again reporting average, minimum and peak would be handy. It would be nice to see if Maxwell is really twice as energy efficient as GCN 1.1 or if it's actually much closer. If it's much closer, all these naysayers prophesying AMD's doom are in for a rude awakening. I wouldn't put it past Nvidia to use marketing language to portray artificially low TDPs.
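As a sketch of the joules-per-frame comparison being proposed here (the card names and wattages below are made up purely for illustration):

```python
# Sketch of the proposed comparison: measure card power at a locked framerate target,
# then compare energy per frame. All numbers are hypothetical.
FPS_TARGET = 60

cards = {
    "Card A": {"avg_watts": 180.0},   # hypothetical measured power at the 60 fps target
    "Card B": {"avg_watts": 230.0},
}

for name, data in cards.items():
    joules_per_frame = data["avg_watts"] / FPS_TARGET   # W = J/s, so J/frame = W / fps
    data["j_per_frame"] = joules_per_frame
    print(f"{name}: {joules_per_frame:.2f} J per frame at {FPS_TARGET} fps")

best = min(cards, key=lambda n: cards[n]["j_per_frame"])
print(f"Most efficient at this target: {best}")
```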
Don't confuse TDP with power consumption, they are not the same thing. TDP is for designing the thermal solution to maintain the chip temperature. If there is more headroom in the chip temperature, then the system can operate faster, consuming more power.
"Intel defines TDP as follows: The upper point of the thermal profile consists of the Thermal Design Power (TDP) and the associated Tcase value. Thermal Design Power (TDP) should be used for processor thermal solution design targets. TDP is not the maximum power that the processor can dissipate. TDP is measured at maximum TCASE"
I just realized that the GTX 980 has a TDP of 165 watts; my Corsair CX430 430W PSU is almost overkill! That's nuts. That even leaves enough room to give the whole system a very good, stable overclock. Right now I have a pair of HD 7850s @ stock speed and an FX-8320 @ 4.5GHz; good thing the Corsair puts out its 430 watts perfectly clean :)
While a good power supply, you are leaving yourself little headroom with 430W. I'm surprised you are getting away with it with two 7850s and not experiencing system crashes.
The 980 is an impressive feat of engineering. Fewer transistors, fewer compute units, less power and better performance... NVIDIA has done a good job here. I hope that AMD has some good improvements of its own under its sleeve.
One thing to remember is they probably save a -ton- of die area/transistors by giving it only what, 1/32 double precision rate? I wonder how competitive in terms of transistors/area an AMD GPU would be if they gutted double precision compute and went for a narrower, faster memory controller.
I am looking forward to your review of the GTX 970 once you have a compatible sample in hand. I would like to see the results of the Folding @Home benchmarks. It seems that this site is the only one that consistently use that benchmark in its reviews.
As a "Folder" I'd like to see any indication that the GTX 970, at a cost of $330 and drawing less watts than a GTX 780; may out produce both the 780 ($420 - $470) and the 780Ti ($600). I will be studying the Folding @ Home: Explicit, Single Precision chart which contains the test results of the GTX 970.
Wow, this is impressive stuff. 10% more performance from 2/3 the power? That'll be great for desktops, but of course even better for notebooks. Very impressed they could pull off that kind of leap on the same process!
They've already managed to significantly bump up the top end mobile part from GTX 680 -> 880, but within a year or so I bet they can go quite a bit higher still.
Oh well, it was nice having a top of the line mobile GPU for a while LOL
If 28nm hit in 2012 though, doesn't that make 2015 its third year? At least 28nm seems to be a really good process, vs all the issues with 90/65nm, etc., since we're stuck on it so long.
Isn't this Moore's Law hitting the constraints of physical reality though? We're taking longer and longer to get to progressively smaller shrinks in die size, it seems like...
Oh well, 22nm's been great with Intel and 28's been great with everyone else!
Nvidia and AMD haven't moved to 22nm because they don't have the funding. Intel has tens of billions to blow on R&D. Broadwell is going to be released in 2015, and it's 14nm.
In light of Nvidia not even trying to move to smaller manufacturing nodes, it would be really nice to see them go on the offensive in the price war. $300 for the GTX 980, and everything lower from there. Probably not now, but, like, spring 2015, that'd be great! Make good and sure to wipe out all the holdouts (like myself) keeping their old cards because they still play everything they play at 1080p. Kinda get all your customers caught up on hardware in the same upgrade time frame.
Then when they finally do drop nodes they can focus on making every card they sell run games at 8K resolution.
Hate to break the news to you, but if you want to game at high level (above 1080p), you need to pay at high level. There is nothing new about that in the entire history of PC enthusiast building and gaming either for those of us who remember making the "huge leap" from a 15" 1024x768 resolution CRT monitor to a whopping 19" 1600x1200 CRT monitor. At least not in the 20 years since I've been involved with it anyway.
Besides all that, that's why GPU makers offer cards for different budgets. If you can't afford their top tier products, you can't afford to game top tier. Period and end of discussion.
It seems as though the big improvement Nvidia has made is to bring CPU-level scheduling/DVFS granularity into their chip. However, once all cores are engaged it ends up using as much power as its predecessor (see tomshardware). What I really want to know is how much of this is due to purely driver-level changes.
Exceptional design. The sad thing is that NVIDIA will take forever to release a 30 SMM Maxwell GPU, and once it finally does, it will cost a ton; even later on, when they release a "budget" version at an unreasonable price of around $650, it will be too late: the great performance potential of today won't be so great tomorrow. Striving for and building amazing GPUs is the right way forward; not putting them in people's hands is a crime. Whatever happened to $500 flagship products?
Just got a GTX 970, and it seems only the latest Nvidia drivers will install for 9xx series cards. Unfortunately, the latest drivers totally screw up some programs that use CUDA and seem to hide its presence from programs like Xillisoft Video Convertor Ultimate :-/ No response of course from either Nvidia or Xillisoft regarding the problem. I wonder how many other programs the drivers break?
Geeze. AnandTech, do an updated best-value graphics card list, because since the launch of the 970/980 retailers are giving some serious price cuts to 770/780/780 Ti's. Newegg has a 780 for less than $300 after rebate and just a hair over $300 before rebate. I'm seeing 780 Ti's for ~$430 and 770s for ~$240. I am amazed to see price cuts this deep, since I haven't seen them in the last several generations and considering how overpriced these cards were. But while supplies last and prices hold/drop, this completely flips price/performance on its head. I feel bad about recommending an AMD 290 Tri-X to a friend a couple months back now. xD
Newegg has an Asus DirectCU II GTX 780 selling in the $290 range after a mail in rebate, promo code and discount. It also comes with a pre-order copy of the new Borderlands game. That has to be the best value to performance GPU out right now. It is almost a full $100 less than the cheapest non-reference R9 290 on newegg and $40 less than the cheapest reference R9 290 which is crazy since this same Asus GTX 780 was selling for over $550 just last month with no free games (and still is on Amazon for some reason).
I feel bad for having just bought the 290 tri-x just a month ago! =( I bought it because you never know when the new cards will be released and how much they will cost. Unfortunately, the new cards came out too soon!
Yeah. To be honest nobody except ardent Nvidia fanboys would've believed Nvidia would release cards as performance and price competitive as they did, especially the 970. The 980 is honestly a little overpriced compared to a few generations ago as they'll slap a $200 premium on it for Big Maxwell but $330 MSRP for the 970 (if I remember correctly) wasn't bad at all, for generally what, 290/780/290X performance?
It's not too surprising as we saw what the 750ti was like. What is disappointing, though, is that I thought nvidia had made some fundamental breakthrough in their designs where, instead, it looks as though they "simply" enabled a better governor.
It'll be interesting to see how the efficiency suffers once Nvidia releases a proper compute die with area dedicated to double-precision FP. I have to keep in mind that, factoring in the stripped-down die compared to AMD's 290/290X cards, the results aren't as competition-blowing as they first seem. But if AMD can't counter these cards with their own stripped-down, gaming-only cards, then Nvidia took the win this generation.
That's an excellent point. I take it you already read the tomshardware review? Their compute performance/W is still good, just not as unbelievable as their gaming performance, though I'm not sure that's because this is a gaming-only card. Regardless, AMD needs to offer something better than what's currently available. Unfortunately, I don't think they will be able to do it. There was a lot of driver work that went into making these Maxwell cards hum.
One thing that really bothers me though is how Anandtech keeps testing the 290/290X with reference cards. Those cards run at 95 C due to the fan control profile in the BIOS and I remember seeing that when people ran those cards with decent nonreference cooling in the 70 C range that power consumption was 15-20+ watts lower. So an AMD die that sacrifices FP64 performance to focus on FP32(gaming, some compute) performance as well as decreasing die size due to the lack of FP64 resources seems like it could be a lot more competitive with Maxwell than people are making it out to be. I have this feeling that the people saying how badly Maxwell trounces AMD's efficiency and that AMD can't possibly hope to catch up are too biased in their thinking.
Do you have a link to those reviews that show non-reference fans make gpus more efficient? I don't know how that could be possible. Given the temps we're looking at the effects on the conductors should be very, very small. Regarding the reduction in fp performance and gaming efficiency, that's a good point. That may indeed be part of the reason why nvidia has the gaming/compute split (aside from the prices they can charge).
Here's an example of a card with liquid cooling. Factor in the overclock that the nonreference card has and that it draws something like 20 watts less in Furmark and the same in 3Dmark. I could be mistaken on the improved power usage but I do recall seeing shortly after the 290X launch that nonreference coolers helped immensely, and power usage dropped as well. Sadly I don't believe Anandtech ever reviewed a nonreference 290X... which is mind boggling to consider, considering how much nonreference cooling helped that card, even outside of any potential power usage decreases.
I wonder why they still give these cards boring numbered names like GTX 980. Except for the Titan, these names kinda suck. Why not at least name it the Maxwell 980, or for AMD's R9 290 series the Hawaii 290? That sounds a lot cooler than GTX or R9. Also, for the last several generations, AMD's and Nvidia's numbering systems were similar, until AMD ended that with the R9/R7 200 series. Before that they had the GTX 700 and HD 7000 series, the GTX 600 and HD 6000 series, and so on. Then, as soon as AMD changed it up, Nvidia decided to skip the GTX 800s for retail desktop GPUs and jump right up to the 900 series. Maybe they will come up with a fancier name for their next-gen cards besides GTX 1000.
I love this card. I have an SLI config and OC'd (very stable) to 1518Mhz Boost and an 8Ghz Memory Clock with EVGA's ACX 2.0 cooler! Incredible how well this card OC's. For the price to performance ratio you get, this is a steal!
Sttm - Thursday, September 18, 2014 - link
"How will AMD and NVIDIA solve the problem they face and bring newer, better products to the market?"My suggestion is they send their CEOs over to Intel to beg on their knees for access to their 14nm process. This is getting silly, GPUs shouldn't be 4 years behind CPUs on process node. Someone cut Intel a big fat check and get this done already.
joepaxxx - Thursday, September 18, 2014 - link
It's not just about having access to the process technology and fab. The cost of actually designing and verifying an SoC at nodes past 28nm is approaching the breaking point for most markets, that's why companies aren't jumping on to them. I saw one estimate of 500 million for development of a 16/14nm device. You better have a pretty good lock on the market to spend that kind of money.
extide - Friday, September 19, 2014 - link
Yeah, but the GPU market is not one of those markets where the verification cost will break the bank, dude.
Samus - Friday, September 19, 2014 - link
Seriously, nVidia's market cap is $10 billion dollars, they can spend a tiny fortune moving to 20nm and beyond...if they want too.I just don't think they want to saturate their previous products with such leaps and bounds in performance while also absolutely destroying their competition.
Moving to a smaller process isn't out of nVidia's reach, I just don't think they have a competitive incentive to spend the money on it. They've already been accused of becoming a monopoly after purchasing 3Dfx, and it'd be painful if AMD/ATI exited the PC graphics market because nVidia's Maxwell's, being twice as efficient as GCN, were priced identically.
bernstein - Friday, September 19, 2014 - link
atm. it is out of reach to them. at least from a financial perspective.
while it would be awesome to have maxwell designed for & produced on intel's 14nm process, intel doesn't even have the capacity to produce all of their own cpus... until fall 2015 (broadwell xeon-ep release)...
kron123456789 - Friday, September 19, 2014 - link
"it also marks the end of support for NVIDIA’s D3D10 GPUs: the 8, 9, 100, 200, and 300 series. Beginning with R343 these products are no longer supported in new driver branches and have been moved to legacy status." - This is it. The time has come to buy a new card to replace my GeForce 9800GT :)bobwya - Friday, September 19, 2014 - link
Such a modern card - why bother :-) The 980 will finally replace my 8800 GTX. Now that's a genuinely old card!!Actually I mainly need to do the upgrade because the power bills are so ridiculous for the 8800 GTX! For pities sake the card only has one power profile (high power usage).
djscrew - Friday, September 19, 2014 - link
Like +1
kron123456789 - Saturday, September 20, 2014 - link
Oh yeah, modern :) It's only 6 years old) But it can handle even Tomb Raider at 1080p with 30-40fps at medium settings :)
SkyBill40 - Saturday, September 20, 2014 - link
I've got an 8800 GTS 640MB still running in my mom's rig that's far more than what she'd ever need. Despite getting great performance from my MSI 660Ti OC 2GB Power Edition, it might be time to consider moving up the ladder since finding another identical card at a decent price for SLI likely wouldn't be worth the effort.So, either I sell off this 660Ti, give it to her, or hold onto it for a HTPC build at some point down the line. Decision, decisions. :)
Kutark - Sunday, September 21, 2014 - link
I'd hold on to it. Thats still a damn fine card. Honestly you could probably find a used one on ebay for a decent price and SLI it up.
IMO though id splurge for a 970 and call it a day. I've got dual 760's right now, first time i've done SLI in prob 10 years. And honestly, the headaches just arent worth it. Yeah, most games work, but some games will have weird graphical issues (BF4 near release was a big one, DOTA 2 doesnt seem to like it), others dont utilize it well, etc. I kind of wish id just have stuck with the single 760. Either way, my 2p
SkyBill40 - Wednesday, September 24, 2014 - link
@ Kutark:Yeah, I tried to buy a nice card at that time despite wanting something higher than a 660Ti. But, as my wallet was the one doing the dictating, it's what I ended up with and I've been very happy. My only concern with a used one is just that: it's USED. Electronics are one of those "no go" zones for me when it comes to buying second hand since you have no idea about the circumstances surrounding the device and seeing as it's a video card and not a Blu Ray player or something, I'd like to know how long it's run, it's it's been OC'd or not, and the like. I'd be fine with buying another one new but not for the prices I'm seeing that are right in line with a 970. That would be dumb.
In the end, I'll probably wait it out a bit more and decide. I'm good for now and will probably buy a new 144Hz monitor instead.
Kutark - Sunday, September 21, 2014 - link
Psshhhhh.... I still have my 3dfx Voodoo SLI card. Granted its just sitting on my desk, but still!!!
In all seriousness though, my roommate, who is NOT a gamer, is still using an old 7800gt card i had laying around because the video card in his ancient computer decided to go out and he didnt feel like building a new one. Can't say i blame him, Core 2 quad's are juuust fine for browsing the web and such.
Kutark - Sunday, September 21, 2014 - link
Voodoo 2, i meant, realized i didnt type the 2.
justniz - Tuesday, December 9, 2014 - link
>> the power bills are so ridiculous for the 8800 GTX!
Sorry but this is ridiculous. Do the math.
Best info I can find is that your card is consuming 230w.
Assuming you're paying 15¢/kWh, even gaming for 12 hours a day every day for a whole month will cost you $12.59. Doing the same with a gtx980 (165w) would cost you $9.03/month.
So you'd be paying maybe $580 to save $3.56 a month.
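For what it's worth, that arithmetic checks out; here is the same calculation spelled out under the same assumptions (230W vs. 165W, 12 hours a day, 15¢/kWh, an average month of ~30.4 days):

```python
# Checking the figures above (same assumptions: 230 W vs 165 W, 12 h/day, 15 cents/kWh).
RATE = 0.15                       # $ per kWh
HOURS_PER_MONTH = 12 * 365 / 12   # 12 h/day averaged over a month (~365 h)

def monthly_cost(watts):
    kwh = watts / 1000.0 * HOURS_PER_MONTH
    return kwh * RATE

old_card = monthly_cost(230)   # ~$12.59
new_card = monthly_cost(165)   # ~$9.03
print(f"8800 GTX: ${old_card:.2f}/month, GTX 980: ${new_card:.2f}/month")
print(f"Monthly saving: ${old_card - new_card:.2f}")   # ~$3.56
```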
LaughingTarget - Friday, September 19, 2014 - link
There is a major difference between market capitalization and available capital for investment. Market Cap is just a rote multiplication of the number of shares outstanding by the current share price. None of this is available for company use and is only an indirect measurement of how well a company is performing. Nvidia has $1.5 billion in cash and $2.5 billion in available treasury stock. Attempting to match Intel's process would put a significant dent into that with little indication it would justify the investment. Nvidia already took on a considerable chunk of debt going into this year as well, which would mean that future offerings would likely go for a higher cost of debt, making such an investment even harder to justify.
While Nvidia is blowing out AMD 3:1 on R&D and capacity, Intel is blowing both of them away, combined, by a wide margin. Intel is dropping $10 billion a year on R&D, which is a full $3 billion beyond the entire asset base of Nvidia. It's just not possible to close the gap right now.
Silma - Saturday, September 20, 2014 - link
I don't think you realize how many billion dollars you need to spend to open a 14 nm factory, not even counting R&D & yearly costs.
It's humongous, there is a reason why there are so few foundries in the world.
sp33d3r - Saturday, September 20, 2014 - link
Well, if the NVIDIA/AMD CEOs are blind enough and cannot see it coming, then Intel is going to manufacture its next integrated graphics on a 10 or 8 nm chip, and though immature it will be tough competition for them in terms of power, efficiency and even weight.
Remember, currently PCs load integrated graphics as a must by Intel, and people add third-party graphics only because Intel's isn't good enough, literally adding the weight of two graphics cards (Intel's and the third party's) to the product. It's worlds more convenient when integrated graphics outperforms or can challenge third-party GPUs; we would just throw NVIDIA away, and guess what, they wouldn't remain a monopoly anymore but rather be completely wiped out.
Besides, Intel's integrated graphics are getting more mature, and not just in die size, with every launch; just compare the 4000 series with the 5000 series. It won't be long before they catch up.
wiyosaya - Friday, September 26, 2014 - link
I have to agree that it is partly not about the verification cost breaking the bank. However, what I think is the more likely reason is that since the current node works, they will try to wring every penny out of that node. Look at the prices for the Titan Z. If this is not an attempt to fleece the "gotta have it buyer," I don't know what is.
Ushio01 - Thursday, September 18, 2014 - link
Wouldn't paying to use the 22nm fabs be a better idea, as they're about to become underused and all the teething troubles have been fixed?
This is the most likely thing to happen; as the transition to 14nm takes place for Intel over the next 6 months, those 22nm fabs will sit empty. They could sell capacity on a process similar to TSMC's latest while keeping their advantage at the same time.
nlasky - Friday, September 19, 2014 - link
Intel uses the same fabs to produce 14nm as it does to produce 22nm.
lefty2 - Friday, September 19, 2014 - link
I can see Nvidia switching to Intel's 14nm; however, Intel charges a lot more than TSMC for its foundry services (because they want to maintain their high margins). That would mean it's only economical for the high-end cards.
SeanJ76 - Friday, September 19, 2014 - link
What a joke!!!! 980GTX doesn't even beat the previous year's 780ti??? LOL!! Think I'll hold on to my 770 SC ACX Sli that EVGA just sent me for free!!
Margalus - Friday, September 19, 2014 - link
uhh, what review were you looking at? or are you dyslexic and mixed up the results between the two cards?
eanazag - Friday, September 19, 2014 - link
Nvidia would get twice as many GPUs per wafer on a 14nm process as on 28nm. Maxwell at 14nm would blow Intel integrated and AMD out of the water in performance and power usage.
That simply isn't the reality, though. Samsung has better-than-28nm processes also. That type of partnership would work well: Nvidia and AMD partnering with Samsung on their fabs. It makes more sense than Intel because Intel views Nvidia as a threat and competitor. There are reasons GPUs are still on 28nm, and they go beyond process availability.
astroidea - Friday, September 19, 2014 - link
They'd actually get four times as many, since you have to consider the squared area: 14^2 * 4 = 28^2.
emn13 - Saturday, September 20, 2014 - link
Unfortunately, that's not how it works. A 14nm process isn't simply a 28nm process scaled by 0.5; different parts are scaled differently, and so the overall die area savings aren't that simple to compute.
In a sense, the concept of a "14nm" process is almost a bit of a marketing term, since various components may still be much larger than 14nm. And of course, the same holds for TSMC's 28nm process... so a true comparison would require more knowledge than you or I have, I'm sure :-) - I'm not sure if Intel even releases the precise technical details of how things are scaled in the first place.
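For reference, the idealized geometry behind the "four times" figure looks like this; as noted above, it is a naive sketch that assumes every feature shrinks with the node name, which real processes do not:

```python
# Naive (ideal) area scaling sketch -- real processes don't shrink every feature
# by the same factor, so treat this as an upper bound, not a prediction.
old_node = 28.0   # nm
new_node = 14.0   # nm

linear_shrink = new_node / old_node      # 0.5
ideal_area_ratio = linear_shrink ** 2    # 0.25 -> a quarter of the area

print(f"Ideal die area: {ideal_area_ratio:.2f}x the original")
print(f"Ideal dies per wafer: {1 / ideal_area_ratio:.0f}x as many")  # ~4x
```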
bernstein - Friday, September 19, 2014 - link
No, because Intel is using their 22nm for Haswell parts... the CPU transition ends in a year with the Broadwell Xeon-EP... at which point almost all the fabs will either be upgraded or upgrading to 14nm, and the rest used to produce chipsets and other secondary dies.
nlasky - Saturday, September 20, 2014 - link
yes but they use the same fabs for both processes
Viewgamer - Friday, September 19, 2014 - link
To Ryan Smith: How can the GTX 980 possibly have a 165W TDP when it actually consumes 8 watts more than the 195W-TDP GTX 680!? Please explain. Did Nvidia just play games with the figures to make them look more impressive?
ArmedandDangerous - Friday, September 19, 2014 - link
TDP =/= Power consumption although they are related. TDP is the amount of heat it will output.
Carrier - Friday, September 19, 2014 - link
You're right, power consumption and heat output are related. That's because they're one and the same! What else could that electricity be converted to? Light? A massive magnetic field? Mechanical energy? (The fan, slightly, but the transistors aren't going anywhere.)
Laststop311 - Friday, September 19, 2014 - link
No, they aren't the same. Not all the electricity used is converted to heat. This is where the word EFFICIENCY comes into play. Yes, it is related in a way, but Maxwell is more efficient with the electricity it draws, using more of it and losing less of it as heat output. It's all in its design.
bernstein - Friday, September 19, 2014 - link
Bullshit. Since a GPU does no chemical or mechanical transformations, all the energy used is converted to heat (by way of moving electrons around). Efficiency in a GPU means how much energy is used for a fixed set of calculations (for example: FLOPS).
Senpuu - Friday, September 19, 2014 - link
It's okay to be ignorant, but not ignorant and belligerent.
bebimbap - Friday, September 19, 2014 - link
there is "work" being done, as transistors have to "flip" by use of electrons. Even if you don't believe that "input energy =\= output heat" think of it this way100w incandescent bulb produces X amount of useful light
18w florescent bulb also produces X amount of useful light
in this sense the florescent bulb is much more efficient as it uses only 18w to produce the same light as the 100w incandescent. so if we say they produce the same amount of heat, then
100w florescent would produce ~5x the light of a 100w incandescent.
Laststop311 - Saturday, September 20, 2014 - link
ur so smart bro
Guspaz - Friday, September 19, 2014 - link
The power draw figures in this article are overall system power draw, not GPU power draw. Since the 980 offers significantly more performance than the 680, it's cranking out more frames, which causes the CPU to work harder to keep up. As a result, the CPU power draw increases, counteracting the benefits of lower GPU power draw.
Carrier - Friday, September 19, 2014 - link
I don't think that can explain the whole difference. It performs similarly to a 780 Ti in Crysis 3, so the difference in power consumption can only come from the card. The 980 is rated 85W less in TDP but consumes only 68W less at the wall. The discrepancy gets worse when you add losses in the power supply.
My guess is the TDP is rated at nominal clock rate, which is cheating a little because the card consistently runs much higher than nominal because of the boost.
kron123456789 - Friday, September 19, 2014 - link
Look at "Load Power Consuption — Furmark" test. It's 80W lower with 980 than with 780Ti.Carrier - Friday, September 19, 2014 - link
Yes, but the 980's clock is significantly lowered for the FurMark test, down to 923MHz. The TDP should be fairly measured at speeds at which games actually run, 1150-1225MHz, because that is the amount of heat that we need to account for when cooling the system.
Ryan Smith - Friday, September 19, 2014 - link
It doesn't really matter what the clockspeed is. The card is gated by both power and temperature. It can never draw more than its TDP.
FurMark is a pure TDP test. All NVIDIA cards will reach 100% TDP, making it a good way to compare their various TDPs.
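A toy model of the gating behavior described here might look like the following; this is purely illustrative and not NVIDIA's actual GPU Boost algorithm, and the limits and step size are assumptions:

```python
# Toy model of a power/temperature-gated boost loop (illustrative only; not
# NVIDIA's real algorithm, just the gating idea described above).
TDP_LIMIT_W = 165.0
TEMP_LIMIT_C = 80.0

def next_clock(clock_mhz, power_w, temp_c, step=13):
    """Raise the clock while under both limits; back off when either is hit."""
    if power_w > TDP_LIMIT_W or temp_c > TEMP_LIMIT_C:
        return clock_mhz - step   # throttle: the card never sustains draw above TDP
    return clock_mhz + step       # headroom left: boost higher

# Example ticks with made-up telemetry readings:
print(next_clock(1126, power_w=150.0, temp_c=72.0))  # boosts
print(next_clock(1215, power_w=168.0, temp_c=79.0))  # throttles (power-limited)
```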
Carrier - Friday, September 19, 2014 - link
If that is the case, then the charts are misleading. GTX 680 has a 195W TDP vs. GTX 770's 230W (going by Wikipedia), but the 680 uses 10W more in the FurMark test.
I eagerly await your GTX 970 report. Other sites say that it barely saves 5W compared to the GTX 980, even after they correct for factory overclock. Or maybe power measurements at the wall aren't meant to be scrutinized so closely :)
Carrier - Friday, September 19, 2014 - link
To follow up: in your GTX 770 review from May 2013, you measured the 680 at 332W in FurMark, and the 770 at 383W in FurMark. Those numbers seem more plausible.
Ryan Smith - Saturday, September 20, 2014 - link
680 is a bit different because it's a GPU Boost 1.0 card. 2.0 included the hard TDP and did away with separate power targets. Actually what you'll see is that GTX 680 wants to draw 115% TDP with NVIDIA's current driver set under FurMark.
Carrier - Saturday, September 20, 2014 - link
Thank you for the clarification.
wanderer27 - Friday, September 19, 2014 - link
Power at the wall (AC) is going to be different from power at the GPU, which is coming from the DC PSU.
There are losses and efficiency differences in converting from AC to DC (PSU), plus a little wiggle from the motherboard and so forth.
solarscreen - Friday, September 19, 2014 - link
Here you go:http://books.google.com/books?id=v3-1hVwHnHwC&...
PhilJ - Saturday, September 20, 2014 - link
As stated in the article, the power figures are total system power draw. The GTX 980 is throwing out nearly double the FPS of the GTX 680, so this is causing the rest of the system (mostly the CPU) to work harder to feed the card. This in turn drives the total system power consumption up, despite the fact the GTX 980 itself is drawing less power than the GTX 680.
squngy - Wednesday, November 19, 2014 - link
It is explained in the article.
Because the GTX 980 renders so many more frames, the CPU is worked a lot harder. The wattages in those charts are for the whole system, so when the CPU uses more power it makes it harder to directly compare GPUs.
galta - Friday, September 19, 2014 - link
The simple fact is that a GPU more powerful than a GTX 980 does not make sense right now, no matter how much we would love to see it.
See, most folks are still gaming @ 1080p, and some of us are moving up to 1440p. Under these scenarios, a GTX 980 is more than enough, even if quality settings are maxed out. Early reviews show that it can even handle 4K with moderate settings, and we should expect further performance gains as drivers improve.
Maybe in a year or two, when 4K monitors become more relevant, a more powerful GPU would make sense. Now they simply don't.
For the moment, nVidia's movement is smart and commendable: power efficiency!
I mean, such a powerful card at only 165W! If you are crazy/wealthy enough to have two of them in SLI, you can cut your power demand by 170W, with corresponding gains in temps and/or noise, and a less expensive PSU if you're building from scratch.
In the end, are these new cards great? Of course they are!
Does it make sense to upgrade right now? Only if you're running a 5xx or 6xx series card, or if your demands have increased dramatically (multi-monitor setup, higher res, etc.).
Margalus - Friday, September 19, 2014 - link
A more powerful GPU does make sense. Some people like to play their games with triple monitors, or more. A single GPU that could play at 7680x1440 with all settings maxed out would be nice.
galta - Saturday, September 20, 2014 - link
How many of us demand such power? The ones who really do can go SLI and OC the cards.
nVidia would be spending billions on a card that would sell thousands. As I said: we would love the card, but it still makes no sense.
Again, I would love to see it, but in the forseeable future, I won't need it. Happier with noise, power and heat efficiency.
Da W - Monday, September 22, 2014 - link
Here's one that demands such power. I play 3600x1920 using 3 screens, almost 4K, at 1/3 the budget, and it's still useful for, you know, working.
Don't want SLI/CrossFire. Don't want a space heater either.
bebimbap - Saturday, September 20, 2014 - link
Gaming at 1080@144, or 1080 with a minimum fps of 120 for ULMB, is no joke when it comes to GPU requirements. Most modern games max out at 80-90fps on an OC'd GTX 670; you need at least an OC'd GTX 770-780, and I'd recommend a 780 Ti. And though a 24" 1080p screen might seem "small", you only have so much focus. You can't focus on peripheral vision; you'd have to move your eyes to focus on another piece of the screen. The 24"-27" size seems perfect so you don't have to move your eyes/head much, or at all.
The next step is 1440@144, or a minimum fps of 120, which requires more GPU than 4K@60. As 1440p has about 2x the pixels of 1080p, you'd need a GPU roughly 2x as powerful (rough pixel math sketched after this comment). So you can see why Nvidia must put out a powerful card at a moderate price point. They need it for their 144Hz G-Sync tech and 3D Vision.
imo the ppi race isn't as beneficial as higher refresh rate. For TVs manufacturers are playing this game of misinformation so consumers get the short end of the stick, but having a monitor running at 144hz is a world of difference compared to 60hz for me. you can tell just from the mouse cursor moving across the screen. As I age I realize every day that my eyes will never be as good as yesterday, and knowing that I'd take a 27" 1440p @ 144hz any day over a 28" 5k @ 60hz.
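A quick sanity check on the pixel arithmetic in the comment above, treating required GPU throughput as roughly proportional to pixels per second (a simplification that ignores per-frame fixed costs):

```python
# Pixel-rate comparison for the targets mentioned above (standard resolutions assumed).
def pixel_rate(width, height, hz):
    return width * height * hz   # pixels the GPU must deliver per second

targets = {
    "1920x1080 @ 144 Hz": pixel_rate(1920, 1080, 144),
    "2560x1440 @ 144 Hz": pixel_rate(2560, 1440, 144),
    "3840x2160 @  60 Hz": pixel_rate(3840, 2160, 60),
}

base = targets["1920x1080 @ 144 Hz"]
for name, rate in targets.items():
    print(f"{name}: {rate/1e6:7.0f} Mpix/s ({rate/base:.2f}x the 1080p144 load)")
```

By this crude measure, 1440p at 144Hz (~1.78x) is indeed a heavier target than 4K at 60Hz (~1.67x).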
Laststop311 - Sunday, September 21, 2014 - link
Well, it all depends on viewing distance. I currently game on a 30" 2560x1600 Dell U3014; since it's larger I can sit further away and still have just as good an experience as with a 24" or 27" that's closer. So you can't just say larger monitors mean you can't focus on it all, because you can at a further distance.
theuglyman0war - Monday, September 22, 2014 - link
The power of the newest technology is, and has always been, an illusion, because the creation of games will always be an exercise in "compromise". Even a game like WoW that isn't crippled by console considerations is created for the lowest-common-denominator demographic in the PC hardware population. In other words... (if you buy it they will make it, vs. if they make it I will upgrade). That's besides the unlimited reach of an open world's "possible" textures and vertex counts.
"Some" artists are of the opinion that more hardware power would result in a less aggressive graphics budget! (When the time spent wrangling a synced, normal-mapped representation of a high-resolution sculpt, or tracking seam problems in lightmapped approximations of complex illumination with long bake times, can take longer than simply using that original complexity.) The compromise can take more time than if we had hardware that could keep up with an artist's imagination.
In which case I gotta wonder about the imagination of the end user that really believes his hardware is the end to any graphics progress?
ppi - Friday, September 19, 2014 - link
On desktop, all AMD needs to do is to lower price and perhaps release an OC'd 290X to match 980 performance. It will reduce their margins, but they won't be irrelevant on the market, like in CPUs vs Intel (where AMD's most powerful beasts barely touch Intel's low-end, apart from some specific multi-threaded cases).
Why so simple? On desktop:
- Performance is still #1 factor - if you offer more per your $, you win
- Noise can be easily resolved via open air coolers
- Power consumption is not such a big deal
So... if an AMD card at a given price is merely as fast as Maxwell, then it's clearly the worse choice. But what if it's faster?
In mobile, however, they are screwed in a big way, unless they have something REAL good up their sleeve (looking at Tonga, I do not think they do; I am convinced AMD intended to pull off another HD 5870 (i.e. be on the new process node first), but it apparently did not work this time around).
Friendly0Fire - Friday, September 19, 2014 - link
The 290X already is effectively an overclocked 290 though. I'm not sure they'd be able to crank up power consumption reliably without running into heat dissipation or power draw limits.
Also, they'd have to invest in making a good reference cooler.
ppi - Saturday, September 20, 2014 - link
AMD will not beat the 980 (they could probably put up some fight, but nVidia could always defend it easily, so why do that; it would just dilute prices). What is more important for them is that, *on desktop*, AMD can still stay relevant in lower price buckets by offering more performance per $ (while relying on partners for custom open-air cooling and ignoring the power draw disadvantage).
Kjella - Sunday, September 21, 2014 - link
You do realize what you said pretty much exactly mirrors what people said about AMD and CPUs a few years back? Just trying to offer value while your competitor is making more efficient chips is a dead end where you're soon so far behind in technology that it's not enough. Nobody wants a 220W CPU (FX-9370/9590) and if AMD needs to pull a 300+W GPU to compete with GTX 980 it'll be equally dead on arrival.
ppi - Sunday, September 21, 2014 - link
Not really. When Core 2 was released, pretty much AMD's entire lineup was made irrelevant (I still use my 7-year-old mid-range Core 2 Duo and I know that AMD chips were not even a consideration back then). Now AMD's fastest card is faster than the second-fastest nVidia offering. Look at the TR 2014 hardware survey, where 80% of clearly enthusiast buyers spend less than $400. Die sizes are similar. Both companies are fabless and thus have access to the same processes (unlike the competition with Intel).
AMD of course HAS TO come up with something better than what they have now. And soon. My point was mainly that they should be able to survive this holiday season sort of okayish.
I expect that AMD is focusing their limited resources on 20nm part, but it apparently did not work as well as it did in times of HD-5000 and 7000 series. And Maxwell improvements are greater than what is achievable just with die shrink. So there's some hard work for AMD ahead. Given necessary lead time for such products, I doubt 300-series will be good enough (unless they were going nuts with efficiency after seeing 680).
I have admired nVidia for a long time for always covering weak spots in their products. It could be seen from the times when they went up against 3dfx, through the FX 5000 era, and now of course they show how they learned from the 480 era.
Silma - Saturday, September 20, 2014 - link
I fully agree.
As long as Intel does not do better in smartphones & tablets, it probably doesn't fully utilize its manufacturing capacity.
It could begin with opening 22 nm to NVIDIA and 14nm in 2015.
Seriously though, I'm not sure why Intel still hasn't bought NVIDIA, except if it foresees troubles getting the deal accepted with regulators.
This would not Mirror the AMD's ATI acquisition. crap + crap = crap.
Outstanding + outstanding = awesome.
Notmyusualid - Saturday, September 20, 2014 - link
+1
SanX - Sunday, September 21, 2014 - link
Intel should have bought NVIDIA long ago, but they have been in lethargy all last decade.
Ryan Smith - Thursday, September 18, 2014 - link
BTW, before anyone asks: we're still working to get images and charts in. 4 days is very little time for a 20K word article. So please hold on for a bit.
boot318 - Thursday, September 18, 2014 - link
Where are the overclocking results? Not done yet? I see the page but it is blank.
RaistlinZ - Thursday, September 18, 2014 - link
Ditto. I can't see the overclocking page.
chizow - Thursday, September 18, 2014 - link
And no 970 results?
Ryan Smith - Thursday, September 18, 2014 - link
As noted in the article, we had a problem with our 970 sample that was not able to be resolved in time for this article. Otherwise I would have very much liked to have a 970 in this review.
Sunrise089 - Friday, September 19, 2014 - link
"Focus on quality first, then timeliness second. There's value in both but there's more value in one." :(extide - Friday, September 19, 2014 - link
Yeah guys, seriously just make the article live a little bit late!
hpglow - Friday, September 19, 2014 - link
The boss quits and all you guys are running around the office with your shirts off screaming at the top of your lungs? The review could have waited an hour or two so that it was done; now I'm not even going to finish reading it.
iLovefloss - Friday, September 19, 2014 - link
They've been doing this since forever. If you look at the comments from the R9 290X launch review, people were complaining about the same thing, for example.
Sunrise089 - Friday, September 19, 2014 - link
Including me. It was unacceptable click-baiting then and it still is. Interestingly enough, it's not a site-wide issue. Surface Pro 3 and Devil's Canyon both had long waits for ultimately excellent reviews. The iPhone 6 will no doubt be a very popular review, and yet Joshua or whoever didn't push it online at midnight. For whatever reason, though, GPU reviews get this weird 'rush to publish, fill in content later' pattern.
djscrew - Friday, September 19, 2014 - link
diva much? jeez give it a rest
nathanddrews - Friday, September 19, 2014 - link
This is not the first time AT has done this; there have been many other incomplete reviews published over the years (decades).
chizow - Friday, September 19, 2014 - link
@hpglow, in Ryan's defense, it was a short turnaround from the press briefing and this has happened in the past. Usually AT's articles focus heavily on the technical aspects also (which is greatly appreciated throughout the industry) and he also gets help from the rest of the staff to stitch the review together, so it is understandable that it is sometimes uploaded piecemeal.
I would rather have something that is eventually updated and stands the test of time, vs. something that is rushed out hastily.
SodaAnt - Friday, September 19, 2014 - link
You think that it would only take an hour or two to get a GPU somehow, run dozens of tests on it, put those tests into tables, put those tables onto pages, then write another few thousand words on those tests?
bernstein - Friday, September 19, 2014 - link
it's nice having one article with a full review, & it's nice to have early partial results... so in the future if publishing with missing content PLZ put in a big fat bold disclaimer:xyz content missing, update coming on 2.2.2222
chizow - Friday, September 19, 2014 - link
@Ryan, thanks for the update, sorry I just scanned through and didn't see the subtext mentioning your issues with the 970. Looking forward to updated results once you get some good samples.
nevertell - Friday, September 19, 2014 - link
You can't read through the article in one sitting yet you complain about the article being rushed?
chizow - Sunday, September 21, 2014 - link
@nevertell, not sure if that comment was directed at me, but I never read through the entire article in the first sitting, especially in this case where I was actually in the market to buy one of these cards and might need to make a quick buying decision. I generally look at results and jump around a bit before going back to read the entire article, and I did not see any subtext on why the 970 wasn't included on this page about "Launching Today":http://www.anandtech.com/show/8526/nvidia-geforce-...
I expected to see something about why the 970 wasn't launching today, staggered launch, didn't get review sample etc but did not see anything, so I asked bc I saw Ryan was attending the comments here and might get a quick response.
boot318 - Thursday, September 18, 2014 - link
Bye, AMD!
Amazing card(s) Nvidia brought to market! I've already seen a couple of reviews showing this monster overclocking to over 1450MHz. Just think about when Nvidia drops a big-die version........ :)
dragonsqrrl - Thursday, September 18, 2014 - link
AMD is by no means out of it. They're still very competitive in terms of performance; however, they're far behind in terms of efficiency, which means that to compete with the 980 they'll likely have to launch a far higher-TDP card that requires more exotic cooling and will almost certainly be more expensive to manufacture. Even when you take the 285 into consideration, which offers 280-level performance at greatly reduced TDP, it's still at a higher TDP than the 980, which now outperforms the 290X by ~15%. And this isn't even taking noise, build quality, or features into consideration... Not a good position for AMD; in fact it's somewhat reminiscent of their processors (minus the competitive performance part).
"Just think about when Nvidia drops a big die version........ :)"
Fortunately for AMD that's just not going to happen on 28nm, otherwise I might be inclined to agree with you. They still have a very real competitive chance with their upcoming cards.
arbit3r - Thursday, September 18, 2014 - link
Oh god, really? The 285 has greatly reduced TDP? Um, the 280 had a 200-watt TDP, the 285 is 190; 10 watts less, I wouldn't call that greatly reduced. Before you say the 280 had a 250-watt TDP, no, that is the 280X.
dragonsqrrl - Friday, September 19, 2014 - link
I haven't done much searching around, but according to Anandtech's review of the 285, the 280 has a 250W TDP.http://www.anandtech.com/show/8460/amd-radeon-r9-2...
arbit3r - Friday, September 19, 2014 - link
Plenty of sites I know of say it's 200, so if there is that much misinfo then AMD is likely at fault for that one. I'm seeing a lot of reviews put the real-world power usage difference at around 20 watts.
Ryan Smith - Friday, September 19, 2014 - link
For the record, 250W for R9 280 comes directly from AMD's reviewer's guide for that product.
hojnikb - Friday, September 19, 2014 - link
7950 (which was then rebranded to 280) had 200W. With 280, they obviously upped the TDP for longer turbo speeds.
ArtForz - Saturday, September 20, 2014 - link
Wasn't the 280 more of a rebranded 7950 Boost (925MHz turbo), and not a 7950 (825MHz, no turbo at all)?
Mr Perfect - Friday, September 19, 2014 - link
True, but the 285 didn't live up to the 180 watt claim. Later in the article they showed it saving only 13 watts under load when compared to the 280. So more like 237 watts?http://www.anandtech.com/show/8460/amd-radeon-r9-2...
Which was really quite disappointing. I need something to cram in my mITX rig, and it has to be close to the 150 watts of the 6870 in there now.
Samus - Friday, September 19, 2014 - link
On a per-watt scale, AMD's GPUs are now as inefficient as their CPUs when compared to the competition. It's good they got those console contracts, because they probably won't be getting the next round if this keeps up.
Absolutely amazing that Maxwell is twice as efficient per watt as GCN 1.2.
Laststop311 - Friday, September 19, 2014 - link
well looks like the gtx 970 is calling your name thenAlexvrb - Saturday, September 20, 2014 - link
That seems to depend on the design reviewed. THG tested a similarly clocked card by a different manufacturer and there was a much larger gap between the 280 and 285 in terms of power consumption.
With that being said, the 980 and 970 are both extremely fast and power efficient. Especially the 970 - if it really hits the market at around that pricing, wow! Incredible value.
Strange that the 980 throttles so much at stock settings even outside of FurMark; the first thing I'd do is go into the settings and fiddle a bit until it boosts consistently. But given its performance it's not really a problem, and it can be remedied. Still, something to keep in mind, especially when overclocking. I wonder how the 980 would have done with the beefier cooler from its higher-TDP predecessors and some mild overvolting?
Laststop311 - Sunday, September 21, 2014 - link
If you look at the gaming benchmarks, the GPU is hitting 80C. Nvidia's design does not allow the GPU to exceed 80C, so it has to lower frequencies to stay at 80C. This is the consequence of using the Titan blower cooler but removing the vapor chamber, lowering its cooling capability. That's why I don't get why all these people are rushing to buy the reference-design GTX 980s, to the point that they're all sold out. They are throttling by hundreds of MHz because the Titan blower cooler without a vapor chamber sucks. Custom cooling options are going to let the GTX 980 reliably hit 1300-1400 MHz, some probably even 1500 MHz, under full load and still stay under the 80C limit. Keep an eye out for MSI's Twin Frozr V design. It's going to have a beefy radiator with 2x 100mm fans in an open-air design, allowing WAY more cooling potential than the reference design. The Twin Frozr V design should allow the card to OC and actually keep those OC frequencies under heavy load, unlike the reference card which can't even keep up with its stock settings under intense gaming. We should see a pretty big performance jump going to custom coolers, and the reference performance is already staggering.
Alexvrb - Sunday, September 21, 2014 - link
Reviewers and "tech enthusiasts" alike jumped all over AMD when they didn't adequately cool their 290 cards. So while I don't disagree with what you're saying, I am just surprised that they would let it ship with such heavy throttling on ordinary games. Especially given that in this case it isn't because Nvidia shipped with a cooler that isn't sufficient - rather it's because by default the fan is running too slowly. Even without the vapor chamber, I bet it would be fine if they just turned up the fan just a hair. Not enough to make it loud, but enough to bring it in line with some of the other high-end cards here (under a load).Anyway I suspect the vapor chamber will return in a higher-end "980 Ti" type configuration. In the meantime, yeah I'd keep an eye out for high-end aftermarket designs with a more aggressive power delivery system and wicked cooling. There's no doubt these chips have serious potential! I'd bet an aggressive 970 could hit the market for under $400 with 980-like performance and a factory warranty. :D
I'd say "poor AMD" but this kind of leapfrogging is nothing new. Even if AMD can't come out with something really impressive in the next several months, they can always remain competitive by dropping prices. My GPU is idle outside of gaming so the actual difference in power consumption in terms of dollars is tiny. Now, for number-crunching rigs that run their GPUs 24/7... that's a different story altogether. But then again, AMD's professional cards have good DP numbers so it's kind of a wash.
Hixbot - Monday, September 22, 2014 - link
I'm very disappointed they got rid of the vapor chamber. I'm not a fan of the 3rd party coolers, as they exhaust the air into the case (a big deal for small form factor PCs). I prefer the blower cooler even though it's noisier, so the loss of the vapor chamber is a big deal.
Viewgamer - Friday, September 19, 2014 - link
LOL, people screaming at the 285. It actually consumes less power than the 980 and 970, not more.
Nvidia greatly understated the TDP of the 980 and 970, to put it lightly.
Both cards consume more power than the 250W TDP 7970 GHz Edition, yet they're somehow rated at 165W and 145W. How laughable!
http://i.imgur.com/nfueVP7.png
nathanddrews - Friday, September 19, 2014 - link
http://www.pcper.com/files/review/2014-09-18/power...
kron123456789 - Friday, September 19, 2014 - link
Different tests, different results. That's nothing new.
kron123456789 - Friday, September 19, 2014 - link
But I still think that Nvidia didn't understate the TDP of the 980 and 970.
Friendly0Fire - Friday, September 19, 2014 - link
Misleading. If a card pumps out more frames (which the 980 most certainly does), it's going to drive up requirements for every other part of the system, AND it's obviously going to draw its maximum possible power. If you were to lock the framerate to a fixed value that all GPUs could reach, the power savings would be more evident.
Also, TDP is the heat generation, as has been said earlier here, which is correlated but not equal to power draw. Heat is waste energy, so the less heat you put out the more energy you actually use to do work. All this means is that (surprise surprise) the Maxwell 2 cards are a lot more efficient than AMD's GCN.
shtldr - Wednesday, September 24, 2014 - link
"TDP is the heat generation, as has been said earlier here, which is correlated but not equal to power draw."The GPU is a system which consumes energy. Since the GPU does not use that energy to create mass (materialization) or chemical bonds (battery), where the energy goes is easily observed from the outside.
1) waste heat
2) moving air mass through the heatsink (fan)
3) signalling over connects (PCIe and monitor cable)
4) EM waves
5) degradation/burning out of card's components (GPU silicon damage, fan bearing wear etc.)
And that's it. The 1) is very dominant compared to the rest. There's no "hidden" work being done by the card. It would be against the law of conservation of energy (which is still valid, as far as I know).
Frenetic Pony - Friday, September 19, 2014 - link
That's a misunderstanding of what TDP has to do with desktop cards. For mobile stuff, sure, that's great. But the bottleneck for "Maxwell 2" isn't TDP, it's clockspeeds. Meaning the efficiency argument is useless if the end user doesn't care.
Now, for certain fields the end user cares very much. Miners have apparently all moved on to ASIC stuff, but for other compute workloads any end user is going to choose NVIDIA currently, just to save on their electricity bill. For the consumer end user, TDP doesn't matter nearly as much unless you're really "Green" conscious or something. In that case AMD's 1 year old 290X competes on price for performance, and whatever AMD's update is, it will do better.
It's hardly a death knell for AMD - not the best thing, considering they were just outclassed for corporate-type compute work. But your typical consumer end user isn't going to see any difference unless they're a fanboy one way or another, and why bother going after a strongly biased market like that?
pendantry - Friday, September 19, 2014 - link
While it's a fair argument that unless you're environmentally inclined the energy savings from lower TDP don't matter, I'd say a lot more people do care about reduced noise and heat. People generally might not care about saving $30 a year on their electricity bill, but why would you choose a hotter, noisier component when there's no price or performance benefit to that choice.
AMD GPUs now mirror the CPU situation where you can get close to performance parity if you're willing to accept a fairly large (~100W) power increase. Without heavy price incentives it's hard to convince the consumer to tolerate what is jokingly termed the "space heater" or "wind turbine" inconvenience that the AMD product presents.
Laststop311 - Friday, September 19, 2014 - link
actually the GPUs from AMD do not mirror the CPU situation at all. AMD's FX 9xxx, with the huge TDP and all, gets outperformed by even the i7-4790K on almost everything, and the 8-core i7-5960X obliterates it in everything; the performance of its CPUs is NOT close to Intel's even with 100 extra watts. At least with the GPUs the performance is close to Nvidia's, even if the power usage is not.
TLDR: AMD's GPU situation does not mirror its CPU situation. The CPU situation is far worse.
Laststop311 - Friday, September 19, 2014 - link
I as a consumer greatly care about the efficiency, TDP, heat, and noise, not just the performance. I do not like hearing my PC. I switched to all Noctua fans, all SSD storage, and a platinum rated PSU that only turns on its fan over 500 watts load. The only noise coming from my PC is my Radeon 5870 card, basically. So the fact this GPU is super quiet means that no matter what AMD does performance wise, if it can't keep up noise wise they lose a sale with me, as I'm sure with many others.
And I'm not a fanboy of either company; I chose the 5870 over the GTX 480 when Nvidia botched that card and made it a loud, hot behemoth. And I'll just as quickly ditch AMD for Nvidia for the same reason.
Kvaern - Friday, September 19, 2014 - link
"For the consumer end user, TDP doesn't matter nearly as much unless you're really "Green""Or live in a country where taxes make up 75% of your power bill \
takeship - Friday, September 19, 2014 - link
The situation we have here now is very reminiscent of AMD's CPU position shortly after Core 2 hit the market. Nvidia now has a product with better performance, better efficiency, better (still) drivers & features, and similar pricing, and that puts AMD in a bad way. The 3xx series had better seriously wow, or AMD's GPU division is quickly going to see the same market erosion that happened after Core 2/Core i. Personally, I think this is a knockout blow. - soon to be former 7970 owner
chizow - Thursday, September 18, 2014 - link
Maxwell is truly amazing stuff. Great advances from Nvidia in virtually every aspect.
Not super thrilled about the 980 price at $550; the 970 price however is amazing at $329. I was going to go with the 980 but 2x970s seem more appealing. The 970 is 13/16 SMXes but it retains the full 4GB, full 256-bit bus, full 64 ROPs. Hopefully there's a lot of 970s on the full 980 PCBs.
Jensen just confirmed the prices on the Live Stream.
shing3232 - Thursday, September 18, 2014 - link
they have an efficiency advantage because they use the best 28nm, called HPM. They use the high performance mobile process; of course they are very efficient
nkm90 - Friday, September 19, 2014 - link
From TSMC's website ( http://www.tsmc.com/english/dedicatedFoundry/techn... ):
"The 28nm High Performance Mobile Computing (HPM) provides high performance for mobile applications to address the need for applications requiring high speed. Such technology can provide the highest speed among 28nm technologies. With such higher performance coverage, 28HPM is ideal for many applications from networking, and high-end smartphone/ mobile consumer products."
It looks like the HPM process was designed for chips that would dissipate much less than the 150-200W this one does. I seriously doubt someone would use HPM for such high power chips. Also, somebody had the voltages for the GM204 chip, and the idle voltage was closer to the 0.85V of HP than the 0.9V of HPM.
chizow - Friday, September 19, 2014 - link
There's no indication they are using 28nm HPM; even the first Maxwell part (GM107) used 28nm HP and alluded to this amazing power/perf ratio we see today with GM204.
It is obvious Nvidia's convergence of mobile *design* (not process) fundamentals helped them, as we saw with Kepler, and this will only be further beneficial with their mobile Maxwell designs.
Sttm - Thursday, September 18, 2014 - link
Yeah, I was looking at 970 results on other sites... it's the 8800GT reborn! Almost top end performance, $220 in savings.
chizow - Friday, September 19, 2014 - link
Yep, the 970 is amazing price:perf. Newegg has them at the $330 price, up to $350 for some custom/OC versions.
I did end up going with a single 980 though. The difference in build quality is just too much, and SLI with my new G-Sync monitor (Swift) has had issues with my current 670 SLI build. The scaling with SLI is also not exceptional with these Maxwell cards (~60%), so the improvement of 2x970 is actually not that much over a single highly overclocked 980.
Still amazing job by Nvidia, the 980 would have been a grand slam at $500 but it is still an Earl Weaver 3-run blast at $550.
uzun - Thursday, September 18, 2014 - link
When will these cards be available via Newegg etc?
arbit3r - Friday, September 19, 2014 - link
i would guess later tonight, maybe tomorrow, though that might be wrong.
chizow - Friday, September 19, 2014 - link
They are available on Newegg.com now. Some SKUs are already selling out. I picked up two of the EVGA 980 SuperClocked models.
Laststop311 - Saturday, September 20, 2014 - link
I'm going to wait for the custom GTX 980s. It was already throttling from hitting the 80C limit in most games. The blower design wouldn't have throttled if they had left the vapor chamber in, but they didn't. My case has plenty of airflow, so I don't require a blower design. MSI's Twin Frozr V open air design will cool the GPU much better and stop it from throttling during gaming. People rushing to buy the reference design are missing out on 100's of MHz due to thermal throttle.
chizow - Saturday, September 20, 2014 - link
Yep, the open-faced custom coolers are definitely better at OC'ing, especially in single-GPU configs, but the problems I have with them are:
1) they tend to have cheaper build quality than the ref, especially the NVTTM cooler which is just classy stuff. The custom coolers replace this with lots and lots of plastic, visible heatpipes, cheapo looking fans. If I wanted an Arctic Accelero on my GPUs I would just buy one.
2) they usually take longer to come to market. Frequently +3-6 weeks lead time. I know its not a super long time in the grand scheme of things, but I'd rather upgrade sooner.
3) The blowers tend to do better in SLI over longer periods of time, and also don't impact your CPU temps/OC as much. I have a ton of airflow too (HAF-X) but I still prefer most of the heat being expelled from the start, and not through my H100i rad.
4) Frankly I'm not too worried about squeezing the last 100-150MHz out of these chips. There was a time I might have been, but I tend to stick it to a safe OC about 100-150MHz below what most people are getting and then call it a day without having to do a dozen 3DMark loops to verify stability.
Laststop311 - Sunday, September 21, 2014 - link
Did you see the benchmarks? Some games were running in the 900s, some in the 1000s, some in the 1100s. It's stuck at these frequencies because the card was riding the 80C limit. As the review mentioned, these aren't the same Titan coolers, as they removed the vapor chamber and replaced it with regular heatpipes. Getting a custom cooled card isn't about squeezing the last 100-150 MHz from an OC, it's about squeezing an extra 400-600 MHz from an OC, as many reviewers have gotten the GTX 980 to OC to 1500MHz. We are talking a massive performance increase from getting the proper cooling, bigger than even the R9 290X going from reference to custom, and that was pretty big itself.
Laststop311 - Sunday, September 21, 2014 - link
Even to get the card to reliably run at stock settings during intense gaming you need a custom cooled card. The reference cooled card can't even reliably hit its stock clock under intense gaming because the blower cooler without a vapor chamber sucks.
chizow - Sunday, September 21, 2014 - link
No, you can adjust the Nvidia fan and GPU temp settings to get sustained Boosts. There is a trade-off in terms of fan noise and/or operating temps, but it is easy to get close to the results of the custom coolers at the expense of fan noise. I personally set my fan curve differently because I think Nvidia's 80C target temp profile is a little bit too passive in how quickly it ramps up fanspeeds. I don't expect to have any problems at all maintaining rated Boost speed, and if I want to overclock, I fully understand the sacrifice will be more fan noise over the custom coolers, but the rest of the negatives regarding custom coolers makes the reference cooler more appealing to me.
venk90 - Thursday, September 18, 2014 - link
The GTX 980 page on NVIDIA's website seems to indicate HDMI 1.4, as it says 3840x2160 at 30 Hz over HDMI (it is mentioned as a footnote). Are you sure about it being HDMI 2.0?
Ryan Smith - Thursday, September 18, 2014 - link
Yes. I've confirmed it in writing and in person.
vegitto4 - Thursday, September 18, 2014 - link
Hi Ryan, great review! Will there be the usual HTPC perspective? For example, did they fix the 23.976 refresh rate as Haswell does? I think it's important to know how these work as HTPC cards. Regards
Ryan Smith - Thursday, September 18, 2014 - link
For this article there will not be. These cards aren't your traditional HTPC cards. However, we can possibly look into it for next week's follow-up.
chizow - Friday, September 19, 2014 - link
I think the definition of HTPC is beginning to change though, and while these may not yet fit into traditional HTPC (Brix and NUC seem to be filling this niche more), they are definitely right in the SteamBox/BattleBox category.
Honestly, SteamBox was the first thing that came to mind when I saw that 165W TDP on the GTX 980. We will be seeing a lot of GM204 variants in the upcoming years in SFF, LAN, SteamBox and gaming laptop form factors, that is for sure.
rennya - Friday, September 19, 2014 - link
Any info on what HDMI 2.0 level and HDCP version this nVidia GPU has? It can only be one of these two: HDMI 2.0 level B with HDCP 2.2, or HDMI 2.0 level A with HDCP 2.0.
If it's the former, gaming performance will suffer on 4K displays that use HDMI 2.0. If it's the latter, this card is DOA when Blu-ray 4K videos come out.
khanov - Friday, September 19, 2014 - link
Ideally, we consumers want a full HDMI 2.0 spec transceiver with 20+Gbps bandwidth AND HDCP 2.2. But these are not yet available and that is why we see these stop-gap measures that manufacturers are calling HDMI 2.0 A and B. This should *hopefully* be resolved by next year.
In the meantime, I'd also like to know what exactly Nvidia is offering in the 970 and 980 cards. Without HDCP 2.2 the compatibility of these cards with future TV and AV gear is questionable.
Ryan, are you able to clarify please?
Ryan Smith - Saturday, September 20, 2014 - link
I don't have any more information available. But I will look into it.
AnnonymousCoward - Friday, September 26, 2014 - link
If the stupid TV industry had gone with the existing and mature DisplayPort interface for 4K, we wouldn't have these stupid HDMI 2.0 problems.
warisz00r - Thursday, September 18, 2014 - link
So is it viable to drop a 970 or 980 in a sub-600W setup?
dishayu - Thursday, September 18, 2014 - link
The whole system peaks at 300W. Even a cheap, non 80+ certified 500W power supply should be able to deliver that much with complete safety.
Alexvrb - Saturday, September 20, 2014 - link
I would NEVER buy a PSU that isn't at least a somewhat halfway decent 80+ certified unit, especially if you're going to combine it with expensive components. At this point anything that falls under the category of "cheap non 80+" is as likely to explode and catch fire as it is to run with complete safety. :P
$30 gets you a 500W 80+ certified unit anyway.
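For anyone doing the sub-600W math, here's a rough back-of-the-envelope sketch of the headroom involved. Only the 165W GPU TDP comes from the review; the CPU and "everything else" wattages are ballpark assumptions for illustration, not measurements.

```python
# Rough PSU headroom check for a single GTX 980 build. Only the 165W GPU TDP
# comes from the review; the other wattages are assumed ballpark figures.

parts = {
    "GTX 980 (rated TDP)": 165,
    "quad-core CPU under load": 90,   # assumed
    "board, RAM, SSD, fans": 50,      # assumed
}

estimated_load = sum(parts.values())

for psu_watts in (430, 500, 600):
    headroom = psu_watts - estimated_load
    print(f"{psu_watts}W PSU: ~{estimated_load}W load, "
          f"{headroom}W ({headroom / psu_watts:.0%}) headroom")
```

That lines up with the ~300W whole-system peak quoted above, which is why even modest units have room to spare on paper; quality is the bigger question.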
nandnandnand - Thursday, September 18, 2014 - link
"GM204 ends up weighing in at 5.2 billion transistors, with a die size of 398mm2. This compares to 3.54B transistors and a die size of 294mm2 for GK104, and 7.1B transistors for 551mm2 for GK110."Does this mean there will be a 10 billion transistor GM210
extide - Friday, September 19, 2014 - link
Probably, yes, but not @ 28nm ;)
Laststop311 - Friday, September 19, 2014 - link
I think you are right. Unless there is another huge delay, big Maxwell will not see the light of day till 20nm. At 28nm the die size would be astronomical, and that's too expensive.
Stuka87 - Thursday, September 18, 2014 - link
Why is it you guys are still using reference 290/290X cards for testing? Are they even being sold anymore?
Oh, and I am having trouble believing nVidia's claimed power consumption. It's certainly lower than a 780 Ti or 290X, but not by the 90W that is claimed according to the graphs.
arbit3r - Friday, September 19, 2014 - link
looked at a few review sites, and it's at least 90 watts; at some sites it's a lot more, but the one that is closest is still about 90. AMD has been known to understate the true draw of their cards and even their CPUs. It tends to be higher than they say.
Ryan Smith - Friday, September 19, 2014 - link
We use reference cards whenever possible. And yes, the reference 290X is still being sold.
That said, we include both normal and uber modes for this reason. Uber mode will be comparable to an open air (custom) 290X.
Stuka87 - Friday, September 19, 2014 - link
Except when it comes to noise and heat; the open air cards are significantly better there. But I understand why you use reference cards.
bill5 - Thursday, September 18, 2014 - link
Barely beating AMD's ancient R9 290X... doesn't look good for Nvidia's new generation, considering AMD's new line is due soon at the high end. Yet they barely need it, as their old line is close to competitive already!
arbit3r - Friday, September 19, 2014 - link
it beats it using something like 75-100+ watts less power. On top of being an overclocking monster, whereas AMD's card doesn't overclock so much.
kron123456789 - Friday, September 19, 2014 - link
I think the new top-end AMD card will have a TDP around 350-400W and a cooler similar to the 295X2 :)
jase240 - Thursday, September 18, 2014 - link
Many of the pages are blank and show nothing for me, including the overclocking section.
Stuka87 - Thursday, September 18, 2014 - link
If you read the first page, Ryan stated that things are still being uploaded.
SirMaster - Friday, September 19, 2014 - link
Where are the overclocking results?
CrystalBay - Friday, September 19, 2014 - link
Are these parts going to be fully DX12 compliant?
extide - Friday, September 19, 2014 - link
No
arbit3r - Friday, September 19, 2014 - link
yes, it says it is.
rahvin - Friday, September 19, 2014 - link
14nm is the first process node since they started building ICs that costs more than the prior generation. It would appear that the days of continually dropping prices are gone if this remains true. People really fail to realize how significant this is, and it's likely to have profound impacts on the industry for the foreseeable future. It's quite possible that process advances will slow down dramatically as a result of the higher costs, because the producers are no longer guaranteed a lower priced product for the same input. Continually improving products at lower prices may be a thing of the past in the CPU space.
vailr - Friday, September 19, 2014 - link
Will there also later be a Maxwell GPU successor to nVidia's GTX 760?
extide - Friday, September 19, 2014 - link
Yes, of course
martixy - Friday, September 19, 2014 - link
The universe is oddly self-correcting when it comes to those types of things. The hiccup in the process node seemingly comes at just the right time to kick designers' asses into letting go of that mad dash for transistors and focus on intelligent and efficient design instead.
Vinny DePaul - Friday, September 19, 2014 - link
Does it mean it is time for me to sell my GTX 770 and buy a GTX 970?
wolfman3k5 - Friday, September 19, 2014 - link
Why would anyone buy your GTX 770 when they could easily be buying a GTX 970 now?!
Laststop311 - Friday, September 19, 2014 - link
Vinny, that depends. If you are gaming at 1080 res there is no point. At 1440 it would depend on the game, but for the most part the 770 is still fine. I don't think you should waste your money, but to each his own.
Nfarce - Friday, September 19, 2014 - link
Well that settles it then. I'll take two 970s for SLI (Guru3D has benches for it). Now the question remains what the vendors will actually sell them for and what the various versions will go for (factory overclocked, etc.). On top of the question of how hot an item these cards will be. I remember trying to get a 680 for nearly two weeks from the Egg, just hitting page refresh randomly and hoping to catch a moment they were back in stock.
HaB1971 - Friday, September 19, 2014 - link
Newegg has the 970 for $339 on the front page of the Desktop video cards page... you can't search for them though, oddly enough
Infy2 - Friday, September 19, 2014 - link
The GTX 980 leaves TDP room for an even faster card targeting 250W and beyond, but will there be one coming?
Mr Perfect - Friday, September 19, 2014 - link
Seeing as this is only a GM204 part, it's probably safe to say there will be a GM210 part in the higher wattage ranges. Probably some damn fool $1000 Titan card...
Laststop311 - Friday, September 19, 2014 - link
probably will see it as the Titan 2 at $1000 first, then a cheaper non-Titan version, just like they did with GK110. Not until 20nm is sorted out tho
evilspoons - Friday, September 19, 2014 - link
So basically the 980 is my 680 SLI setup, but on a single card, plus some new features. And it's quieter and uses less power... And then you can SLI it later. Awesome!!
Nfarce - Friday, September 19, 2014 - link
Yup! Just like my original single 680 was equal to two 570s. This wait was worth skipping the 7-series over. I have a feeling there are going to be a whole lot of 670s and 680s flooding the used market starting Friday.
lilmoe - Friday, September 19, 2014 - link
TSMC 28nm is showing its age...
CristianMataoanu - Friday, September 19, 2014 - link
I see the 980 is rated at 165W while the 680 is rated at 195W, but the test results show that the total system consumption using the 980 is 8W higher than the one using the 680. I understand there may be some variation in the CPU power because the 980 pushes more fps and therefore there is more work for the CPU.
Is it possible that the 680 is less efficient in using its resources and is therefore not able to hit its max TDP?
jay401 - Friday, September 19, 2014 - link
On the Power, Temp, and Noise page, the chart labeled "GeForce GTX 980 Average Clockspeeds" shows a label reading "Battlefield 3" instead of "Battlefield 4". =)
Ryan Smith - Friday, September 19, 2014 - link
D'oh! Thanks.
Mr Perfect - Friday, September 19, 2014 - link
What revision DisplayPorts are those, 1.2 or 1.3? I see that the 1.3s had a launch article on AT earlier this week; any chance they made it into Maxwell along with HDMI 2?
Mr Perfect - Friday, September 19, 2014 - link
Oh, they're 1.2. It's listed on the 980 board review, but I was looking in the architecture section.
martixy - Friday, September 19, 2014 - link
Page 6 (Better AA: Dynamic Super Resolution & Multi-Frame Sampled Anti-Aliasing) of the article is MISSING (for me only?). What's up with that?
Mr Perfect - Friday, September 19, 2014 - link
Missing here too; they must still be uploading things.
wolfman3k5 - Friday, September 19, 2014 - link
They should fix their back-end database issues...
Laststop311 - Friday, September 19, 2014 - link
read the entire article and maybe you would know why
Harnser - Friday, September 19, 2014 - link
Are we going to see a Maxwell 2 version of the GTX 750 Ti? HDMI 2.0 and the better AA would be great additions.
Samus - Friday, September 19, 2014 - link
Ryan, page 3, 2nd/3rd paragraphs refer to GM107 when you meant to refer to GM204
;)
Ranger90125 - Friday, September 19, 2014 - link
The R290X was branded "Loud" in Anandtech's review (a comment subsequently removed), and while token gestures of approval were made in that article, Mr Smith did seem to go out of his way to under-emphasise the strong points of that card. Now we have the "Mighty Maxwell" review, and unsurprisingly Mr Smith is waxing lyrical about the virtues of his favourite company's flagship GPU. Any reader who has read Anandtech's reviews of the R290X and GTX 980 must surely recognise the blatant bias in Nvidia's favour. Having been an avid reader of Anandtech's articles for many years, one hopes the recent departure of its founding father will not precipitate an all time low in the quality of Anandtech's GPU coverage. I encourage Mr Smith to temper his bias in forthcoming GPU reviews, if only to prevent an exodus of Anandtech's faithful to sites that are capable of providing a balanced perspective on the ongoing battle between Nvidia and AMD GPUs.
Dribble - Friday, September 19, 2014 - link
Looks at the prominently placed "AMD CENTRE" link at the top of the page.
wolfman3k5 - Friday, September 19, 2014 - link
Don't forget, this is a prominent pro Intel/NVIDIA site. What did you expect?!
Samus - Friday, September 19, 2014 - link
That's because everything AMD touches is a joke. ATI was at the top of their game, far ahead of NVidia on price:performance, then AMD buys them. Look where ATI's been since. GCN is a crappy architecture - the Netburst of GPUs. Ridiculously high power consumption. The only corner market it has is FP64. If you actually game, GCN is a dud. The only reason Sony/Microsoft went with AMD GPUs was because it's the best (only) integrated CPU/GPU solution (something NVidia has no IP for), other than Intel, which is way too expensive for consoles and still not as good.
bwat47 - Friday, September 19, 2014 - link
Until Maxwell, AMD's GPUs were handily outperforming Nvidia when it came to price/performance. GCN isn't a bad architecture, it's just outdated compared to Nvidia's brand new one. I'm sure AMD has architecture improvements coming down the road too. This happens all the time with Nvidia and AMD, where one of them leapfrogs the other in architecture; the cycle will continue. Right now Nvidia is ahead with Maxwell, but acting like AMD is doomed or saying that their architecture is the "Netburst of GPUs" is silly.
Laststop311 - Sunday, September 21, 2014 - link
AMD is going to have to perform a miracle to get their performance and energy use close to Nvidia's. AMD may keep up in performance, but they are going to do that with a 300+ watt card requiring dual 8-pin connectors plus liquid cooling or a triple-slot cooler, because it's going to hog power to keep up with and surpass Nvidia. Nvidia's solution is much more elegant and there is little hope for AMD to match that
Laststop311 - Sunday, September 21, 2014 - link
This isn't even counting GM210. When big Maxwell drops at 20nm I just don't see how AMD will be able to match it. Like I said, it will take an engineering miracle from AMD
Yojimbo - Friday, September 19, 2014 - link
There's a difference between bias and impression. At what point does what you call "tempering bias" become a source of bias in and of itself, because one cannot give one's impression? For instance, the removal of branding the R290X reference card "loud" (which it is). I encourage Mr. Smith neither to "temper his bias" nor to get involved in the "ongoing battle between Nvidia and AMD GPUs", and instead to report the facts and relay his honest impression of those facts. If he does this well, I think the majority of readers of this site will support him for attempting to give a real representation of the state of the products available, which is what they are presumably looking for.
wolfman3k5 - Friday, September 19, 2014 - link
Looks like these GTX 970 and 980 cards are shit when it comes to compute, especially double precision floating point operations. I don't game, so I don't care about FPS. I do more productive things with video cards though.
Laststop311 - Friday, September 19, 2014 - link
then maybe you should buy a workstation card and not a gaming card, or at the very least a Titan if you can't afford the super outrageously priced Quadros and Teslas
Nfarce - Sunday, September 21, 2014 - link
Then you simply aren't the target market for these types of cards. Like Laststop311 said... go bend over for a professional level workstation GPU that most people here care nothing about. :-/
AnnonymousCoward - Friday, September 26, 2014 - link
Shit? It dominated half the compute tests by a large margin.
Laststop311 - Friday, September 19, 2014 - link
I have to admit I am greatly impressed. So impressed I don't know if I will wait for big Maxwell. For a 2560x1600 gamer, a single GTX 980 runs everything incredibly well, all while being quiet. Custom coolers are going to boost the performance even more, considering it was throttling back after hitting the 80C limit. MSI Twin Frozr V cooling should really unleash its full performance. I almost pulled the trigger on a GTX 780 Ti too. Now instead of spending 700 I spend 550 and get better in every aspect.
PS: My current PC is a Gulftown i7-980X + Radeon 5870, and the GPU is forcing me to either run at a non-native resolution or turn down the graphics level of some games, as I didn't have a 2560x1600 monitor when I first got the PC; the 5870 was fine for 1080 but not so for 1600. The GTX 980 will be a huuuuuuuuge performance boost. The CPU is still really fast OC'd to 4.2GHz, 6 cores/12 threads. Thought about going for an X99 system and was going to, until I saw that lovely 55" LG OLED that can be found for 3000. So I'll just wait for Skylake-E before I upgrade the CPU side.
bleh0 - Friday, September 19, 2014 - link
X99, 5820K, SLI 970s.... Oh man, I can't wait.
dj_aris - Friday, September 19, 2014 - link
Useless stuff. Today's cards are still kinda underpowered for 4K, while too overpowered for 1080p. You don't need anything more than a GTX 770 / R9 280X to play anything maxed out in 1080p.
Salvor - Friday, September 19, 2014 - link
You could try 1440p or 1600p instead of 2160p; that's probably the sweet spot for these cards. A nice increase in pixels from 1080p, and it looks really nice.
Makaveli - Friday, September 19, 2014 - link
The performance this card provides at 1440p is impressive, and this would be the card I would buy if I was on a 27" or 30" monitor. As for 4K, these cards are still too slow unless you Xfire or SLI.
Also impressed with the compute performance. I will still hold on to my 7970 GHz, which is still faster in Sony Vegas Pro, but I like the improvement on the NV side in this area.
I'm still playing at 1200p, so there hasn't been any need to upgrade my GPU. I'm hoping that by that time these node issues will be resolved, so I can get my 60-75% boost in GPU performance when I upgrade.
piroroadkill - Friday, September 19, 2014 - link
Haha, if you think the 970 is useless - it beats out the Radeons whilst being cheaper, an instant change in the market - you are insane.
If you think people care about 3840x2160, they don't. If you do, buy a Radeon 295X2 and be done with it. It can do it, but it's ropey at times, no doubt.
nathanddrews - Friday, September 19, 2014 - link
You're completely forgetting about the people that prefer frame rates over 60. There's plenty of room for improvement at 1080p to fully support the 120/144Hz market. 1080p over 120Hz is more important to me than 4K.
NA1NSXR - Friday, September 19, 2014 - link
970 w/ aftermarket cooler + OC = win for $330-ish.
yannigr2 - Friday, September 19, 2014 - link
285, 285X, 300 series DOA.
Nvidia managed to do an HD 4850 to AMD and take revenge for what AMD did to them in 2008.
Drunktroop - Sunday, September 21, 2014 - link
It is more like the G92s - the 8800 GT and 8800 GTS 512 - than the HD 4850, IMO.
atl - Friday, September 19, 2014 - link
In the compute benchmarks, I would appreciate a digital currency test pack.
Ryan Smith - Friday, September 19, 2014 - link
Could you please expand on that request?
GPUs have been supplanted by FPGAs and ASICs for both Bitcoin (SHA-256) and Litecoin (scrypt). At this point I'm not convinced cryptocoin processing speeds are going to be relevant.
ABR - Friday, September 19, 2014 - link
Maybe not relevant for professional bitcoin miners as such, but it's a good benchmark of cryptography / big integer number crunching capabilities where a fair amount of effort has gone into optimizing the software for the hardware.
Lux88 - Friday, September 19, 2014 - link
I want to thank Ryan for using FahBench.
rickon66 - Friday, September 19, 2014 - link
I'm interested in 4K hardware, currently using a 1600p Dell with a 780 Ti OC, and want to move up to 2160p. I'm hoping this 980 is fully compliant with HDMI 2.0!
theMillen - Friday, September 19, 2014 - link
you realize that if you're hoping for 2.0 compliance for a 4K monitor, you will want to use DisplayPort.
Hrobertgar - Friday, September 19, 2014 - link
It sounds silly, but the pictures of the restaurant in the voxel discussion look exactly like Poncho Villa's in Redondo Beach, Cali. That place was my favorite Mexican restaurant when I lived in Cali, and I always enjoyed Sunday brunch there.
Peichen - Friday, September 19, 2014 - link
The one with the blue/white checkered table cloth? Maybe I will remember this when visiting CA in the future.
SydneyBlue120d - Friday, September 19, 2014 - link
I'd like to read in future testing how the new cards handle 4K60 footage in both HEVC and VP9 format. Here http://www.reduser.net/forum/showthread.php?111230... you can find some 4K materials, e.g.
atlantico - Friday, September 19, 2014 - link
I'm sorry, but I couldn't care less about power efficiency on an enthusiast GPU. The 780 Ti was a 250W card and that is a great card because it performs well. It delivers results.
I have a desktop computer, a full ATX tower. Not a laptop. PSUs are cheap enough; it's not even a question of that.
So please, stuff the power requirements of this GTX980. The fact is if it sucked 250W and was more powerful, then it would have been a better card.
A5 - Friday, September 19, 2014 - link
They'll be more than happy to sell you a $1000 GM210 Titan Black Ultra GTX, I'm sure.
Fact is that enthusiast cards aren't really where they make their money anymore, and they're orienting their R&D accordingly.
Fallen Kell - Friday, September 19, 2014 - link
Exactly. Not only that, the "real" money is in getting the cards in OEM systems which sell hundreds of thousands of units. And those are very power and cooling specific.
Antronman - Sunday, September 21, 2014 - link
Yep, yep, and yep again.
For OEMs, the difference between spending 10 more or less dollars is huge.
More efficient cards means less power from the PSU. It's one of the reasons why GeForce cards are so much more popular in OEM systems.
I have to disagree with the statement about enthusiast cards not being of value to Nvidia.
Many people are of the opinion that Nvidia has always had better performance than AMD/ATI.
Tikcus9666 - Friday, September 19, 2014 - link
For desktop cards, power consumption is meaningless to the 99%. Price/performance is much more important. If Card A uses 50W more under full load than Card B, but performs around the same and is £50 cheaper to buy, then at a 15p per kWh energy cost it would take 6666 hours of running to get your £50 back. Add to this that if Card A puts more heat into the room, in the winter months your heating system will use less energy, meaning it takes even longer to get your cash back.... tldr: wattage is only important in laptops and tablets and things that need batteries to run
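To make that break-even arithmetic explicit, here's a minimal Python sketch using the same figures from the comment above (50W delta, £50 price gap, 15p per kWh); the two-hours-a-day conversion at the end is just an extra illustration.

```python
# Break-even arithmetic from the comment above. The 50W delta, the £50 price
# gap, and the 15p/kWh tariff are the commenter's figures.

extra_watts = 50          # how much more Card A draws under load
price_gap_gbp = 50.0      # how much cheaper Card A is to buy
tariff_gbp_per_kwh = 0.15

extra_kw = extra_watts / 1000.0
cost_per_hour = extra_kw * tariff_gbp_per_kwh        # about £0.0075 per hour
hours_to_break_even = price_gap_gbp / cost_per_hour  # about 6667 hours

print(f"Extra running cost: £{cost_per_hour:.4f} per hour of load")
print(f"Hours needed to erase the £{price_gap_gbp:.0f} saving: {hours_to_break_even:.0f}")
print(f"At 2 hours of gaming per day: {hours_to_break_even / (2 * 365):.1f} years")
```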
jwcalla - Friday, September 19, 2014 - link
At least in this case it appears the power efficiency allows for a decent overclock. So you can get more performance and heat up your room at the same time.
Of course, I'm sure they're leaving some performance on the table for a refresh next year. Pascal is still a long way off, so they have to extend Maxwell's lifespan. Same deal as with Fermi and Kepler.
Icehawk - Friday, September 19, 2014 - link
When I built my current mATX box, one criterion was that it be silent, or nearly so, while still being a full power rig (i7 OC'd & 670), and the limitation really is GPU draw - thankfully NV's draw had dropped by the 6xx series enough that I was able to use a fanless PSU and get my machine dead silent. I am glad I don't need a tower box that sounds like a jet anymore :)
I would love to see them offer a high TDP, better cooled option though, for the uber users who won't care about costs, heat, or sound and are just looking for the max performance to drive those 4K/surround setups.
Yojimbo - Friday, September 19, 2014 - link
I agree that power consumption in itself isn't so important to most consumer desktop users, as long as they don't require extra purchases to accommodate the cards. But since power consumption and noise seem to be directly related for GPUs, power efficiency is actually an important consideration for a fair number of consumer desktop users.
RaistlinZ - Sunday, September 21, 2014 - link
Yeah, but they're still limited by the 250W spec. So the only way to give us more and more powerful GPUs while staying within 250W is to increase efficiency.
kallogan - Friday, September 19, 2014 - link
dat beast
jmunjr - Friday, September 19, 2014 - link
Wish you had done a GTX 970 review as well like many other sites, since way more of us care about that card than the 980, since it is cheaper.
Gonemad - Friday, September 19, 2014 - link
Apparently, if I want to run anything under the sun at 1080p cranked to full at 60fps, I will need to get me one GTX 980 and a suitable system to run with it, and forget mid-range priced cards.
That should put a huge hole in my wallet.
Oh yes, the others can run stuff at 1080p, but you have to keep tweaking drivers, turning AA on, turning AA off - what a chore. And the age-old joke: yes, it RUNS Crysis, at the resolution I'd like.
Didn't the card, by any chance, actually benefit from being fabricated at 28nm, by spreading its heat over a larger area? If the whole thing hypothetically just shrunk to 14nm, wouldn't all that 165W of power be dissipated over a smaller area (1/4 the area?), and this thing would hit the throttle and stay there?
Or, by being made smaller, would it actually dissipate even less heat and still get faster?
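The heat-density half of that question can be put in rough numbers with a minimal sketch. The 165W TDP and 398mm2 die size are the GM204 figures quoted in these comments; the quarter-sized die is the commenter's hypothetical, and power is assumed unchanged by the shrink, which real process scaling would not actually give you.

```python
# Naive heat-density arithmetic for the die-shrink question above. 165W and
# 398mm^2 are the quoted GM204 figures; the quarter area is hypothetical, and
# power is assumed constant across the shrink (it would not be in practice).

tdp_w = 165.0
die_mm2 = 398.0

for label, area_mm2 in (("28nm GM204", die_mm2),
                        ("hypothetical quarter-size die", die_mm2 / 4)):
    print(f"{label}: {area_mm2:.0f} mm^2 -> {tdp_w / area_mm2:.2f} W/mm^2")
```

So a same-power shrink would roughly quadruple the heat flux the cooler has to pull out of the die, which is why heat density (and the Dennard scaling point in the next reply) matters.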
Yojimbo - Friday, September 19, 2014 - link
I think that it depends on the process. If Dennard scaling were to be in effect, then it should dissipate proportionally less heat. But to my understanding, Dennard scaling has broken down somewhat in recent years, and so I think heat density could be a concern. However, I don't know if it would be accurate to say that the chip benefited from the 28nm process, since I think it was originally designed with the 20nm process in mind, and the problem with putting the chip on that process had to do with the cost and yields. So, presumably, the heat dissipation issues were already worked out for that process...?
AnnonymousCoward - Friday, September 26, 2014 - link
The die size doesn't really matter for heat dissipation when the external heat sink is the same size; the thermal resistance from die to heat sink would be similar.
danjw - Friday, September 19, 2014 - link
I would love to see these built on Intel's 14nm process or even the 22nm. I think both Nvidia and AMD aren't comfortable letting Intel look at their technology, despite NDAs and firewalls that would be a part of any such agreement.
Anyway, thanks for the great review Ryan.
Yojimbo - Friday, September 19, 2014 - link
Well, if one goes by Jen-Hsun Huang's (Nvidia's CEO) comments of a year or two ago, Nvidia would have liked Intel to manufacture their SoCs for them, but it seems Intel was unwilling. I don't see why they would be willing to have them manufacture SoCs and not GPUs, given that at that time they must have already had the plan to put their desktop GPU technology into their SoCs, unless the one year delay between the parts makes a difference.
r13j13r13 - Friday, September 19, 2014 - link
Not until the AMD 300 series comes out with native support for DirectX 12
Arakageeta - Friday, September 19, 2014 - link
No interpretation of the compute graphs whatsoever? Could you at least report the output of CUDA's deviceQuery tool?
texasti89 - Friday, September 19, 2014 - link
I'm truly impressed with this new line of GPUs. To be able to achieve this leap in efficiency using the same transistor feature size is a great incremental achievement. Bravo TSMC & Nvidia. I feel comfortable thinking that we will soon get this amazing 980 performance level in gaming laptops once we scale technology to the 10nm process. Keep up the great work.
stateofstatic - Friday, September 19, 2014 - link
Spoiler alert: Intel is building a new fab in Hillsboro, OR specifically for this purpose...
TheJian - Saturday, September 20, 2014 - link
http://blogs.nvidia.com/blog/2014/09/19/maxwell-an...
Did I miss it in the article, or did you guys just purposely forget to mention NV claims it does DX12 too? See their own blog. Microsoft's DX12 demo runs on... MAXWELL. Did I just miss the DX12 talk in the article? Every other review I've read mentions this (TechPowerUp, Tom's Hardware, HardOCP, etc.). Must be that AMD Center still having its effect on your articles ;)
They were running a converted elemental demo (converted to dx12) and Fable Legends from MS. Yet curiously missing info from this site's review. No surprise I guess with only an AMD portal still :(
From the link above:
"Part of McMullen’s presentation was the announcement of a broadly accessible early access program for developers wishing to target DX12. Microsoft will supply the developer with DX12, UE4-DX12 and the source for Epic’s Elemental demo ported to run on the DX12-based engine. In his talk, McMullen demonstrated Maxwell running Elemental at speed and flawlessly. As a development platform for this effort, NVIDIA’s GeForce GPUs and Maxwell in particular is a natural vehicle for DX12 development."
So maxwell is a dev platform for dx12, but you guys leave that little detail out so newbs will think it doesn't do it? Major discussion of dx11 stuff missing before, now up to 11.3 but no "oh and it runs all of dx12 btw".
One more comment on the 980: if it's a reference launch, how come other sites already have OC versions (e.g., Tom's Hardware has a Windforce OC 980, though stupidly as usual they downclocked it and the two OC/superclocked 970s they had to ref clocks... ROFL - like you'd buy an OC card and downclock it)? It seems to be a launch of OC all around. Newegg even has them in stock (check the EVGA OC version):
http://www.newegg.com/Product/Product.aspx?Item=N8...
And with a $10 rebate so only $559 and a $5 gift card also.
"This model is factory overclocked to 1241 MHz Base Clock/1342 MHz Boost Clock (1126 MHz/1216 MHz for reference design)"
Who would buy ref for a $10 difference? In fact the ref cards are $569 at Newegg, so you save by buying the faster card... LOL.
cactusdog - Saturday, September 20, 2014 - link
TheJian, wow. Did you read the article? Did you read the conclusion? AT says the 980 is "remarkable", "well engineered", "impeccable design" and has "no competition". They covered almost all of Nvidia's marketing talking points and you're going to accuse them of a conspiracy? Are you fking retarded??
Daniel Egger - Saturday, September 20, 2014 - link
It would be nice to, rather than just talk about the 750 Ti, also include it in the comparisons, to see in clearer perspective what it means to go from Maxwell I to Maxwell II in terms of performance, power consumption, noise and (while we are at it) performance per watt and performance per $.
Also, where are the benchmarks for the GTX 970? I sure respect that this card is in a different ballpark, but the somewhat reasonable power output might actually make the GTX 970 a viable candidate for an HTPC build. Is it also possible to use it with just one additional 6-pin connector (since, as you mentioned, this would be within the specs without any overclocking) or does it absolutely need 2 of them?
SkyBill40 - Saturday, September 20, 2014 - link
As was noted in the review at least twice, they were having issues with the 970 and thus it won't be tested in full until next week (along with the 980 in SLI).
MrSpadge - Saturday, September 20, 2014 - link
Wow! This makes me upgrade from a GTX 660 Ti - not because of gaming (my card is fast enough for my needs) but because of the power efficiency gains for GP-GPU (running GPU-Grid under BOINC). Thank you nVidia for this marvelous chip and fair prices!
jarfin - Saturday, September 20, 2014 - link
I still CAN'T understand AMD's 'uber' option. It doesn't belong in the tests, because it's just an 'OC' button, nothing else.
The tests should use the plain R9 290X, not AnandTech's 'AMD Center' uber way.
And I can't help the strong feeling that AnandTech is leaning AMD's way, because they have their own 'AMD Center' section.
So it means people can't read their reviews of the Nvidia vs Radeon card race without wondering whether AnandTech favors the Radeon side one way or another.
And it's so clear, that's it.
btw
I hope AnandTech makes it clear that AMD's R9 200 series is really the competition for Nvidia's 900 series, because everyone knows AMD skipped the 8000 series and put the R9 200 series up against Nvidia's 700 series, when it should have been the 8000 series.
So now the GPU generations on both sides are even.
Meaning that the next AMD R9 300 series, or whatever is coming from AMD, will battle Nvidia's NEXT generation of GPUs, NOT the 900 series.
The history of both GPU lines is clear on the net.
thank you all
p.s. where is the Nvidia Center??
Gigaplex - Saturday, September 20, 2014 - link
Uber mode is not an overclock. It's a fan speed profile change to reduce thermal throttling (underclocking) at the expense of noise.
dexgen - Saturday, September 20, 2014 - link
Ryan, is it possible to see the average clock speeds in different tests after increasing the power and temperature limit in Afterburner?
And also, once the review units for non-reference cards come in, it would be very nice to see what the average clock speeds for different cards with and without an increased power limit would be. That would be a great comparison for people deciding which card to buy.
silverblue - Saturday, September 20, 2014 - link
Exceptional by NVIDIA; it's always good to see a more powerful yet more frugal card especially at the top end.AMD's power consumption could be tackled - at least partly - by some re-engineering. Do they need a super-wide memory bus when NVIDIA are getting by with half the width and moderately faster RAM? Tonga has lossless delta colour compression which largely negates the need for a wide bus, although they did shoot themselves in the foot by not clocking the memory a little higher to anticipate situations where this may not help the 285 overcome the 280.
Perhaps AMD could divert some of their scant resources towards shoring up their D3D performance to calm down some of the criticism because it does seem like they're leaving performance on the table and perhaps making Mantle look better than it might be as a result.
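To put the bus-width point above in numbers: peak memory bandwidth is just bus width (in bytes) times the effective data rate. The sketch below uses the commonly quoted GDDR5 speeds for these cards; treat the exact clocks as approximate.

```python
# Bus-width vs. memory-clock trade-off, in numbers.
# Peak bandwidth (GB/s) = (bus width in bits / 8) * effective data rate in Gbps.
# Data rates are the commonly quoted GDDR5 speeds; treat them as approximate.

cards = {
    "GTX 980 (256-bit, 7 Gbps)":  (256, 7.0),
    "R9 290X (512-bit, 5 Gbps)":  (512, 5.0),
    "R9 285 (256-bit, 5.5 Gbps)": (256, 5.5),
}

for name, (bus_bits, gbps) in cards.items():
    bandwidth_gbs = bus_bits / 8 * gbps
    print(f"{name}: {bandwidth_gbs:.0f} GB/s")
```

Which is roughly why delta color compression matters: the narrower, faster bus starts well behind on raw numbers and has to make up the gap in effective throughput.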
Luke212 - Saturday, September 20, 2014 - link
Where are the SGEMM compute benchmarks you used to put on high end reviews?
garadante - Sunday, September 21, 2014 - link
What might be interesting is doing a comparison of video cards for a specific framerate target to (ideally, perhaps it wouldn't actually work like this?) standardize the CPU usage and thus CPU power usage across greatly differing cards. And then measure the power consumed by each card. In this way, couldn't you get a better example ofgaradante - Sunday, September 21, 2014 - link
Whoops, hit tab twice and it somehow posted my comment. Continued: couldn't you get a better example of the power efficiency for a particular card, and then meaningful comparisons between different cards? I see lots of people mentioning how the 980 seems to be drawing far more watts than its rated TDP (and I'd really like someone credible to come in and state how heat dissipated and energy consumed are related. I swear they're the exact same number, as any energy consumed by the transistors would, after everything, be released as heat, but many people disagree here in the comments and I'd like a final say). Nvidia can slap whatever TDP they want on it and it can be justified by some marketing mumbo jumbo. Intel uses their SDPs; Nvidia using a 165 watt TDP seems highly suspect. And please, please use a non-reference 290X in your reviews, at least from a comparison standpoint. Hasn't it been proven that having cooling that isn't garbage and runs the GPU closer to high 60s/low 70s can lower power consumption (due to leakage?) by something on the order of 20+ watts with the 290X? Yes, there's justification in using reference products, but let's face it, the only people who buy reference 290s/290Xs were either launch buyers or people who don't know better (there's the blower argument but really, better case exhaust fans and non-reference cooling destroy that argument).
So basically I want to see real, meaningful comparisons of efficiencies for different cards at some specific framerate target to standardize CPU usage. Perhaps even monitoring CPU usage over the course of the test and reporting average, minimum, peak usage? Even using monitoring software to measure CPU power consumption in watts (as I'm fairly sure there are reasonably accurate ways of doing this already, as I know CoreTemp reports it as its probably just voltage*amperage, but correct me if I'm wrong) and reported again average, minimum, peak usage would be handy. It would be nice to see if Maxwell is really twice as energy efficient as GCN1.1 or if it's actually much closer. If it's much closer all these naysayers prophesizing AMD's doom are in for a rude awakening. I wouldn't put it past Nvidia to use marketing language to portray artificially low TDPs.
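Here's a minimal sketch of the joules-per-frame comparison being proposed: cap every card to the same framerate, measure card-only power at that cap, and divide. The card names and wattages below are placeholders, not measurements.

```python
# Sketch of the fixed-framerate efficiency comparison described above: cap all
# cards to the same fps, measure card-only power, then compare energy per frame.
# The wattages are placeholders, not real measurements.

cap_fps = 60

card_power_watts = {     # watts drawn by the card alone while capped to 60 fps
    "Card A": 145,
    "Card B": 220,
}

for card, watts in card_power_watts.items():
    joules_per_frame = watts / cap_fps
    print(f"{card}: {watts}W at {cap_fps} fps -> {joules_per_frame:.2f} J per frame")
```

Capping the framerate also keeps the CPU doing roughly the same work for every card, which is the standardization the comment is after.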
silverblue - Sunday, September 21, 2014 - link
Apparently, compute tasks push the power usage way up; stick with gaming and it shouldn't.
fm123 - Friday, September 26, 2014 - link
Don't confuse TDP with power consumption, they are not the same thing. TDP is for designing the thermal solution to maintain the chip temperature. If there is more headroom in the chip temperature, then the system can operate faster, consuming more power.
"Intel defines TDP as follows: The upper point of the thermal profile consists of the Thermal Design Power (TDP) and the associated Tcase value. Thermal Design Power (TDP) should be used for processor thermal solution design targets. TDP is not the maximum power that the processor can dissipate. TDP is measured at maximum TCASE"
https://www.google.com/url?sa=t&source=web&...
NeatOman - Sunday, September 21, 2014 - link
I just realized that the GTX 980 has a TDP of 165 watts; my Corsair CX430 430-watt PSU is almost overkill! That's nuts. That's even enough room to give the whole system a very good stable overclock. Right now I have a pair of HD 7850s @ stock speed and an FX-8320 @ 4.5GHz; good thing the Corsair puts out over 430 watts perfectly clean :)
Nfarce - Sunday, September 21, 2014 - link
While a good power supply, you are leaving yourself little headroom with 430W. I'm surprised you are getting away with it with two 7850s and not experiencing system crashes.ET - Sunday, September 21, 2014 - link
The 980 is an impressive feat of engineering. Fewer transistors, fewer compute units, less power and better performance... NVIDIA has done a good job here. I hope that AMD has some good improvements of its own up its sleeve.
garadante - Sunday, September 21, 2014 - link
One thing to remember is they probably save a -ton- of die area/transistors by giving it only what, 1/32 double precision rate? I wonder how competitive in terms of transistors/area an AMD GPU would be if they gutted double precision compute and went for a narrower, faster memory controller.Farwalker2u - Sunday, September 21, 2014 - link
I am looking forward to your review of the GTX 970 once you have a compatible sample in hand.
I would like to see the results of the Folding@Home benchmarks. It seems that this site is the only one that consistently uses that benchmark in its reviews.
As a "Folder," I'd like to see any indication that the GTX 970, at a cost of $330 and drawing fewer watts than a GTX 780, may out-produce both the 780 ($420 - $470) and the 780 Ti ($600). I will be studying the Folding@Home: Explicit, Single Precision chart which contains the test results of the GTX 970.
Wolfpup - Monday, September 22, 2014 - link
Wow, this is impressive stuff. 10% more performance from 2/3 the power? That'll be great for desktops, but of course even better for notebooks. Very impressed they could pull off that kind of leap on the same process!
They've already managed to significantly bump up the top end mobile part from GTX 680 -> 880, but within a year or so I bet they can go quite a bit higher still.
Oh well, it was nice having a top of the line mobile GPU for a while LOL
If 28nm hit in 2012 though, doesn't that make 2015 its third year? At least 28nm seems to be a really good process, vs all the issues with 90/65nm, etc., since we're stuck on it so long.
Isn't this Moore's Law hitting the constraints of physical reality though? We're taking longer and longer to get to progressively smaller shrinks in die size, it seems like...
Oh well, 22nm's been great with Intel and 28's been great with everyone else!
mesahusa - Tuesday, September 23, 2014 - link
Nvidia and AMD haven't moved to 22nm because they don't have the funding. Intel has tens of billions to blow on R&D. Broadwell is going to be released in 2015, and it's 14nm.
Hrel - Monday, September 22, 2014 - link
In light of Nvidia not even trying to reduce manufacturing nodes, it would be really nice to see them go on the offensive in the price war. $300 for the GTX 980, everything lower from there. Probably not now, but like, spring 2015, that'd be great! Make good and sure to wipe out all the holdouts (like myself) keeping their old cards because they still play everything they play at 1080p. Kinda get all your customers caught up on hardware in the same upgrade time frame.
Nfarce - Monday, September 22, 2014 - link
Hate to break the news to you, but if you want to game at a high level (above 1080p), you need to pay at a high level. There is nothing new about that in the entire history of PC enthusiast building and gaming, for those of us who remember making the "huge leap" from a 15" 1024x768 resolution CRT monitor to a whopping 19" 1600x1200 CRT monitor. At least not in the 20 years since I've been involved with it, anyway.
Besides all that, that's why GPU makers offer cards for different budgets. If you can't afford their top tier products, you can't afford to game top tier. Period and end of discussion.
tuxRoller - Monday, September 22, 2014 - link
It seems as though the big improvement Nvidia has made is to bring CPU-level scheduling/DVFS granularity into their chip. However, once all cores are engaged it ends up using as much power as its predecessor (see Tom's Hardware).
What I really want to know is how much of this is due to purely driver-level changes.
yhselp - Tuesday, September 23, 2014 - link
Exceptional design. The sad thing is that NVIDIA will take forever to release a 30 SMM Maxwell GPU, and once it finally does, it will cost a ton; even later on, when they release a "budget" version for an unreasonable price of around $650, it will be too late - the great performance potential of today won't be so great tomorrow. Striving for and building amazing GPUs is the right way forward; not empowering people with them is a crime. Whatever happened to $500 flagship products?
Rhodie - Wednesday, September 24, 2014 - link
Just got a GTX 970, and it seems only the latest Nvidia drivers will install for 9xx series cards. Unfortunately the latest drivers totally screw up some programs that use CUDA, and seem to hide its presence from programs like Xillisoft Video Convertor Ultimate
garadante - Thursday, September 25, 2014 - link
Geeze. Anandtech, do an updated best value graphics card list because since the launch of the 970/980 retailers are giving some serious price cuts to 770/780/780 Ti's. Newegg has a 780 for less than $300 after rebate and just a hair over $300 before rebate. I'm seeing 780 Ti's for ~$430 and 770s for ~$240. I am amazed to see price cuts this deep since I haven't seen them the last several generations and considering how overpriced these cards were. But while supplies last and prices hold/drop, this completely flips price/performance on it's head. I feel bad recommending an AMD 290 Tri-X to a friend a couple months back now. xDgaradante - Thursday, September 25, 2014 - link
Please do an updated best-value graphics card list* Where are my manners! D:
jman9295 - Friday, September 26, 2014 - link
Newegg has an Asus DirectCU II GTX 780 selling in the $290 range after a mail-in rebate, promo code, and discount. It also comes with a pre-order copy of the new Borderlands game. That has to be the best value-to-performance GPU out right now. It is almost a full $100 less than the cheapest non-reference R9 290 on Newegg and $40 less than the cheapest reference R9 290, which is crazy since this same Asus GTX 780 was selling for over $550 just last month with no free games (and still is on Amazon for some reason).
mixer4x - Thursday, September 25, 2014 - link
I feel bad for having bought the 290 Tri-X just a month ago! =( I bought it because you never know when the new cards will be released or how much they will cost. Unfortunately, the new cards came out too soon!
garadante - Thursday, September 25, 2014 - link
Yeah. To be honest, nobody except ardent Nvidia fanboys would've believed Nvidia would release cards as performance- and price-competitive as they did, especially the 970. The 980 is honestly a little overpriced compared to a few generations ago, as they'll slap a $200 premium on it for Big Maxwell, but $330 MSRP for the 970 (if I remember correctly) wasn't bad at all for, what, 290/780/290X-level performance?
tuxRoller - Friday, September 26, 2014 - link
It's not too surprising, as we saw what the 750 Ti was like. What is disappointing, though, is that I thought NVIDIA had made some fundamental breakthrough in their designs when, instead, it looks as though they "simply" enabled a better governor.
garadante - Friday, September 26, 2014 - link
It'll be interesting to see how the efficiency suffers once NVIDIA releases a proper compute die with area dedicated to double-precision FP. I have to keep in mind that, when factoring in the stripped-down die compared to AMD's 290/290X cards, the results aren't as competition-blowing as they first seem. But if AMD can't counter these cards with their own stripped-down, gaming-only cards, then NVIDIA took the win this generation.
tuxRoller - Friday, September 26, 2014 - link
That's an excellent point. I take it you already read the Tom's Hardware review? Their compute performance/W is still good, but not as unbelievable as their gaming performance, and I'm not sure it's because this is a gaming-only card. Regardless, though, AMD needs to offer something better than what's currently available. Unfortunately, I don't think they will be able to do it. There was a lot of driver work that went into making these Maxwell cards hum.
garadante - Friday, September 26, 2014 - link
One thing that really bothers me, though, is how Anandtech keeps testing the 290/290X with reference cards. Those cards run at 95 C due to the fan control profile in the BIOS, and I remember seeing that when people ran those cards with decent non-reference cooling in the 70 C range, power consumption was 15-20+ watts lower. So an AMD die that sacrifices FP64 performance to focus on FP32 (gaming, some compute) performance, as well as shrinking the die thanks to the dropped FP64 resources, seems like it could be a lot more competitive with Maxwell than people are making it out to be. I have this feeling that the people saying how badly Maxwell trounces AMD's efficiency, and that AMD can't possibly hope to catch up, are too biased in their thinking.
tuxRoller - Saturday, September 27, 2014 - link
Do you have a link to those reviews that show non-reference coolers make GPUs more efficient? I don't know how that could be possible; given the temps we're looking at, the effects on the conductors should be very, very small. Regarding the reduction in FP performance and gaming efficiency, that's a good point. That may indeed be part of the reason why NVIDIA has the gaming/compute split (aside from the prices they can charge).
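One plausible mechanism, for what it's worth, is leakage: static power in CMOS rises roughly exponentially with die temperature, so the same chip doing the same work at 70 C instead of 95 C can draw noticeably less power even though its switching power is unchanged. A rough back-of-envelope sketch, with every figure assumed purely for illustration:

```python
# Back-of-envelope sketch: every number here is assumed, purely for illustration.
# Static (leakage) power in CMOS rises roughly exponentially with die temperature,
# so the same GPU doing the same work can draw less total power when it runs cooler.

def leakage_watts(temp_c, leak_ref_w, ref_temp_c=95.0, doubling_c=25.0):
    """Scale a reference leakage figure to another temperature, assuming leakage
    doubles every `doubling_c` degrees (a common rule of thumb, not a measurement)."""
    return leak_ref_w * 2 ** ((temp_c - ref_temp_c) / doubling_c)

if __name__ == "__main__":
    dynamic_w = 220.0  # assumed switching power; treated as temperature-independent
    leak_95c = 40.0    # assumed leakage at the reference-cooler 95 C operating point
    for temp in (95.0, 70.0):
        total = dynamic_w + leakage_watts(temp, leak_95c)
        print(f"{temp:.0f} C: ~{total:.0f} W board power")
```

With those assumed numbers the gap works out to roughly 20 W, which is at least in the ballpark of the 15-20+ W figure discussed above; the real difference depends on the particular silicon and the voltage the card actually runs at.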
garadante - Sunday, September 28, 2014 - link
Here's an example of a card with liquid cooling. Factor in the overclock that the non-reference card has and that it draws something like 20 watts less in FurMark and the same in 3DMark. I could be mistaken on the improved power usage, but I do recall seeing, shortly after the 290X launch, that non-reference coolers helped immensely and power usage dropped as well. Sadly, I don't believe Anandtech ever reviewed a non-reference 290X... which is mind-boggling to consider, given how much non-reference cooling helped that card, even outside of any potential power usage decreases.
garadante - Sunday, September 28, 2014 - link
http://www.tomshardware.com/reviews/lcs-axr9-290x-... Whoops, forgot the link.
jman9295 - Friday, September 26, 2014 - link
I wonder why they still give these cards boring numbered names like GTX 980. Except for the Titan, these names kinda suck. Why not at least name it the Maxwell 980, or, for AMD's R9 290 series, the Hawaii 290? That sounds a lot cooler than GTX or R9. Also, for the last several generations, AMD's and Nvidia's numbering systems seemed similar, up until AMD ended that with the R9/R7 200 series. Before that, they had the GTX 700 and HD 7000 series, the GTX 600 and HD 6000 series, and so on. Then, as soon as AMD changed it up, Nvidia decided to skip the GTX 800s for retail desktop GPUs and jump right up to the 900 series. Maybe they will come up with a fancier name for their next-gen cards besides the GTX 1000s.
AnnonymousCoward - Saturday, September 27, 2014 - link
Naw, names are much harder to keep track of than numbers that inherently describe relative performance.
Ash_ZA - Friday, October 3, 2014 - link
Instead of the VXGI animated tech demo of the moon landing, they should have just used a studio to film it and passed it off as animated. ;)
LadyGamer85 - Thursday, October 16, 2014 - link
How was he able to overvolt to 1.25 V? I can only get 1.21 V on mine :(
onmybikedrunk - Monday, December 8, 2014 - link
I love this card. I have an SLI config, OC'd (very stable) to a 1518 MHz boost clock and an 8 GHz memory clock with EVGA's ACX 2.0 cooler! Incredible how well this card OCs. For the price-to-performance ratio you get, this is a steal!
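For context, that memory figure is an effective (per-pin) data rate, so on the GTX 980's 256-bit bus it works out to roughly 256 GB/s of peak bandwidth versus 224 GB/s at the stock 7 Gbps. A quick sketch of the arithmetic:

```python
# Quick arithmetic: what an "8 GHz" effective memory clock means in bandwidth
# terms on the GTX 980's 256-bit GDDR5 bus.

def peak_bandwidth_gbs(effective_gbps_per_pin, bus_width_bits=256):
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width, in bytes."""
    return effective_gbps_per_pin * bus_width_bits / 8

if __name__ == "__main__":
    print(f"stock 7.0 Gbps: {peak_bandwidth_gbs(7.0):.0f} GB/s")  # 224 GB/s
    print(f"OC    8.0 Gbps: {peak_bandwidth_gbs(8.0):.0f} GB/s")  # 256 GB/s, ~14% more
```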
AxeMastersINC - Friday, February 27, 2015 - link
Does overclocking the 980 using the parameters here shorten its lifespan?