
  • K_Space - Thursday, July 7, 2016 - link

    So hardly a drop in performance even with compatibility mode.... sheesh, a storm in a teamcup and all those fanboys from both sides make it sound like the world is about to end.
  • K_Space - Thursday, July 7, 2016 - link

    that would be a tea cup*. Edit mode by xmas anyone?
  • Wreckage - Thursday, July 7, 2016 - link

    That's because they didn't fix the issue. They just transferred the excess power draw from the motherboard to the 6-pin. They are still out of spec, risking their customers' systems just to cheat on benchmarks. Thankfully the 1060 is coming and people can upgrade to a safe, reliable card.
  • Meteor2 - Friday, July 8, 2016 - link

    Or, you could say that AMD have cut power draw by 10% with less than a 3% cost to frame rate to deliver a safe reliable card. That's pretty impressive.
  • BurntMyBacon - Friday, July 8, 2016 - link

    @Wreckage: "That's because they didn't fix the issue. They just transferred the over power draw from the motherboard to the 6 pin. They are still out of spec."

    Did you read the article? They did fix the issue. It cost a grand total of 3% or less performance. They also allowed for a safer (than before) "Out of Spec" operational mode by transferring some of the load to a connector that is more tolerant in practice.

    As for reliability, this is not the first card released out of spec. People frequently overclock their cards to out-of-spec power consumption. Even factory overclocks from card manufacturers sometimes go out of spec. This one was more concerning because it was a reference design, but the problem has been rectified. By the way, that extra 1 or 2 watts (do we seriously not know which it is?) could easily be accounted for by the test setup. I haven't checked, but I'll wait for some more sites to review it and confirm before worrying about 1W.
  • fanofanand - Friday, July 8, 2016 - link

    Stop being so moderate, everyone needs to be aggressively in one camp or the other. AMD is going to burn everyone's house down and murder their first born! Nvidia is the only safe way to game! Nvidia is gouging everyone, only AMD can save us from the tyranny of Intel/Nvidia! Please, work on it so you can be irrational like everyone else here (looking at you Wreckage).
  • BurntMyBacon - Friday, July 8, 2016 - link

    @fanofanand: "Stop being so moderate, everyone needs to be aggressively in one camp or the other."

    I'm sorry. :(
    I'll try to work on that. Here goes.

    Here's an article from Tom's for some more perspective (pcper posted below):
    http://www.tomshardware.com/reviews/amd-radeon-rx-...

    You'll have to do some reading to find this detail, but it appears that compatibility mode pulls 5.6A heated and 5.4A cold. Spec is 5.5A for reference. IF ONLY AMD HADN'T SKIMPED ON THE COOLER, THEY MIGHT HAVE AVOIDED RUNNING OUT OF SPEC AND BURNING DOWN THE HOUSES OF THE POOR, ELDERLY, AND VETERANS (should I throw children in as well?).

    Was that better?
  • fanofanand - Friday, July 8, 2016 - link

    MUCH better. :) Thank you for contributing to the zealotry!
  • nick85er - Friday, July 8, 2016 - link

    this was beautifully stated.

    On that note, the wife's system will have an RX 490 (soon), and my monster will have a GTX 1080 by Christmas. Hope VR is as uber as they make it out to be - although the wife will be a bit jelly of my 28" 4K. Hooray, PC master race.
  • GingerTea - Sunday, August 7, 2016 - link

    1080 Ti. Save your money; we don't want to end up like last generation, stuck with 980s because we were too impulsive to wait for the Ti.
    Given that the Titan X is out, they're honestly probably just waiting for the 480X/490X to come out so that they can release an HBM2 part to compete. (I know it doesn't matter, but it will matter to me, the consumer, since if I'm blowing $1500 CDN I want HBM2 so I don't feel bad when cheaper HBM2 cards come out next year.)
    For real tho, wait 2 months past Christmas and I bet we see a 1080 Ti at $800 USD MSRP with Titan XP performance.
  • miribus - Friday, July 8, 2016 - link

    If AMD connected all of the power and ground wires on their end, then using a 6+2 connector instead of a 6-pin puts you within spec.
  • ACE76 - Monday, July 11, 2016 - link

    Just about every person who buys an enthusiast video card overclocks it to some degree for performance... that immediately puts the card out of spec anyway... this whole thing was blown out of proportion.
  • maccorf - Friday, July 8, 2016 - link

    Did AMD kill your dog or something?
  • Outlander_04 - Friday, July 8, 2016 - link

    That's because there was no issue, apart from in the minds of a couple of reviewers and some Nvidia fans.
    PCI-e x16 slots are electrically good for 300 watts, and the RX 480 was certified as it was.

    Once you also know that the same 6 wires of an 8-pin PCI-e power connector are rated at 150 watts [the other two do not carry current for the card to consume], then you can see that this really is a problem only to the uninformed, and no one's motherboard was ever at risk.
  • wolrah - Friday, July 8, 2016 - link

    "PCI-e x 16 slots are electrically good for 300 watts , and the RX 480 was certified as it was."

    Incorrect. A PCIe card can consume up to 300 watts and be within spec, but only 75 of that can come from the slot. Quoting a PCI-SIG presentation by an Intel engineer:

    A 300W add-in card can receive power by the following
    methods:
    * 75W from x16 PCIe connector plus 150W from a 2x4 connector plus 75W from a 2x3 connector.

    * 75W from x16 PCIe connector plus 75W from a first 2x3 connector, plus 75W from a second 2x3 connector, plus 75W from a third 2x3 connector.
    ** Note that this is not the preferred approach.

    There is a configuration parameter in the PCIe spec that lets the card tell the host it'll be using 300 watts, and some AMD fanboys have run with that, claiming the spec allows 300 watts from the slot itself.
  • wolrah - Friday, July 8, 2016 - link

    You are of course technically correct that they're still violating the spec, but come on. That power's already coming from the power supply one way or another. The actual load on it doesn't change. All that changes is it's now being routed over a dedicated set of cables with big connectors designed to carry many amps rather than tiny edge connector pins.

    They went from violating the spec in a way that legitimately mattered to doing so in a way that really doesn't. It's like the difference between doing 90 MPH while weaving through heavy traffic and doing 90 MPH on an empty freeway at 3 AM. They're both equally against the applicable rules but one is clearly risky where the other one pretty much isn't.
  • ACE76 - Monday, July 11, 2016 - link

    Yeah, maybe your PSU can't handle the power draw but for 99.9% of other systems, this fix will do just fine.
  • HighTech4US - Thursday, July 7, 2016 - link

    The RX 480 is still non-compliant on the PCI-e connector even in compatibility mode (which is OFF by default).

    http://www.pcper.com/reviews/Graphics-Cards/AMD-Ra...

    Quote: With the original launch driver we saw the PEG slot pulling 6.8A or more, with the 6-pin pulling closer to 6.6A. On 16.7.1 the PEG slot draw rate drops to 6.1-6.2A. Again, that is still above the 5.5A rated maximum for the slot.

    Quote: Current still doesn’t make it down to 5.5A in our testing, but the PEG slot is now pulling 5.75A in our worst case scenario, more than a full amp lower than measured with the 16.6.2 launch driver

    So you have two choices: 10.9% or 4.5% non-compliant.
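
    For anyone checking the math, those percentages are just the measured current over the 5.5A rating - a back-of-the-envelope sketch (Python; the currents are PCPer's measurements quoted above):

        # Overage relative to the 5.5A rated maximum for the slot's +12V pins.
        SLOT_RATED_A = 5.5

        measurements = [
            ("16.6.2 launch driver", 6.8),
            ("16.7.1 default mode", 6.1),
            ("16.7.1 compatibility mode", 5.75),
        ]

        for label, amps in measurements:
            over_pct = (amps - SLOT_RATED_A) / SLOT_RATED_A * 100
            print(f"{label}: {amps}A -> {over_pct:.1f}% over spec")

        # 16.6.2 launch driver: 6.8A -> 23.6% over spec
        # 16.7.1 default mode: 6.1A -> 10.9% over spec
        # 16.7.1 compatibility mode: 5.75A -> 4.5% over spec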
  • RussianSensation - Thursday, July 7, 2016 - link

    The 75W rating for 6-pin and 150W rating for the 8-pin are not true ratings.

    The six-pin connector uses two +12 V wires to carry up to 75 W, whereas the eight-pin connector uses three +12 V wires to carry up to 150 W. Although these figures are what the specifications allow, the wires and terminals of each connector are technically capable of handling much more power. Each pin in the PCI Express auxiliary power connectors is rated to handle up to 8 amps of current using standard terminals—more if using HCS or Plus HCS terminals. By counting the number of terminals, you can calculate the power-handling capability of the connector.

    PCIe power compliance is a paper spec; 6-pin and 8-pin connectors are rated to draw much more power than it allows. The R9 295X2 is only rated at 375W by that useless compliance rating but can draw 500W+ with ease without anything blowing up. I know because I've been running the card 100% loaded for months at a time without issues. 85-91W from the 6-pin connector is nothing.

    If you actually looked up the detailed specs for the 6-pin graphics connector, you'd see that most modern PSUs made in the last 10 years can provide 192-288W of power through it:

    http://www.tomshardware.co.uk/power-supply-specifi...
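
    The 192-288W figure is simple pin math - a sketch (Python; assumes the 8A standard-terminal rating quoted above, a 12V rail, and two or three populated +12V wires):

        # Power-handling capability of a 6-pin PEG connector by terminal count.
        PIN_RATING_A = 8.0   # standard terminal rating, per the quote above
        RAIL_V = 12.0

        for live_pins in (2, 3):  # 6-pin cables wire two or three +12V pins
            capability_w = live_pins * PIN_RATING_A * RAIL_V
            print(f"{live_pins} powered +12V pins: {capability_w:.0f}W")

        # 2 powered +12V pins: 192W
        # 3 powered +12V pins: 288W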

    It's amazing how the entire Internet has now become 'experts' on power supplies and PCIe slots without actually doing any research on the topic.
  • Oxford Guy - Friday, July 8, 2016 - link

    "It's amazing how the entire Internet has now become 'experts' on power supplies and PCIe slots without actually doing any research on the topic."

    Why miss out on the opportunity to clutch one's pearls when it comes to AMD?

    Nonetheless, it seems evident that products should be reined in to fit the stated specifications. If those are too conservative then they need to be changed.
  • fxv300 - Friday, July 8, 2016 - link

    My PSU's 12V rail is rated as such: +12V 42A = 504 watts.
    I have been using the RX 480 for a couple of days now and the most power draw I have seen is about 40 watts... check out my video here showing GPU-Z stats:
    https://youtu.be/tyX7k3EbjPo
  • BlueBlazer - Saturday, July 9, 2016 - link

    GPUZ does not show the overall power consumption of the entire card. Also those tests (in the video) will not stress the GPU much at all.
  • The_Assimilator - Friday, July 8, 2016 - link

    *sigh*

    "The 75W rating for 6-pin and 150W rating for the 8-pin are not true ratings."

    Except that they are, because those are the power limits set by the PCIe specification. Someone could build a system with a motherboard that only ever allows up to 75W to be drawn from the PCIe slot, and pair it with a PSU that only ever allows up to 75W to be drawn from the 6-pin PEG connector. Both those components would be completely compliant with the specification, but if you tried to run an RX 480 in this system, the system would fail (perhaps spectacularly) because the card is not compliant - and this would be entirely AMD's fault.

    Now, you are correct that most people who buy RX 480 will have motherboards and power supplies that are overbuilt and will have no issue running the card. But that isn't the point! The point is that we have electrical specifications for a very good reason - so that our PC components actually work together without burning each other out, or worse.

    AMD has already played fast and loose with the PCIe spec back in the R9 295 X2 days, but that was mostly okay because the only people who bought those cards had monster PSUs and high-end motherboards with built-in protections. The RX 480 is targeted at a much wider audience and some of those will have computers that aren't as forgiving of specification violations - Dell and HP OEM systems come to mind, for a start. Tell me, if those users' computers fail when using an RX 480, is it their fault, or AMD's?

    Stop making excuses for AMD. They f**ked up, plain and simple, and if people like you keep defending them and buying their products instead of calling them out on their bulls**t, they won't bother to fix their f**kups going forward, which means consumers will continue getting substandard products. Are you happy with that? Because if you are, I've got a bridge to sell you.
  • fanofanand - Friday, July 8, 2016 - link

    Dell and HP OEM PCs typically do not have GPUs that use a PEG connector. Those that do will have power supplies that can handle it. The sky is not falling. AMD is certainly playing fast and loose, but there is no need to dramatize it.
  • Outlander_04 - Friday, July 8, 2016 - link

    https://www.reddit.com/r/Amd/comments/4rbw8p/facts...

    PCI-e x16 slots are electrically good for 300 watts [unless you put the computer in an oven].
    So scary that one might have to pull 80 watts.
  • BlueBlazer - Saturday, July 9, 2016 - link

    Look further down the thread for comments debunking the OP's false claims. The actual specifications http://suddendocs.samtec.com/catalog_english/pcie.... say "2.2 A per pin (2 adjacent pins powered)". This means 1.1A per pin, just like the standard Molex and PCI Express specifications. The Molex datasheet http://www.molex.com/pdm_docs/ps/PS-87715-200.pdf shows the manufacturer follows the PCI Express specifications exactly.
  • BlueBlazer - Friday, July 8, 2016 - link

    That's hovering near the maximum limits of the PCI Express specifications. Still worse than all other graphics cards, which draw way below that amount of current on average. Looks like the VRM configuration is the main culprit.
  • emn13 - Friday, July 8, 2016 - link

    The spec allows an 8% margin of error on that figure, so that's actually compliant. Why would the spec even bother to mention 5.5A? No idea.
  • Outlander_04 - Friday, July 8, 2016 - link

    https://www.reddit.com/r/Amd/comments/4rbw8p/facts...

    No issue now, no issue then. The card was certified.
    You can stop worrying now.
  • FourEyedGeek - Friday, July 8, 2016 - link

    I agree, however if the original release was the same as compatibility mode there wouldn't have been an issue at all. People with systems that could have handled it could overclock and get the benefits.
  • BurntMyBacon - Friday, July 8, 2016 - link

    @FourEyedGeek: "I agree, however if the original release was the same as compatibility mode there wouldn't have been an issue at all. People with systems that could have handled it could overclock and get the benefits."

    Yes. This is exactly how it should have been handled.
  • guachi - Thursday, July 7, 2016 - link

    That's... quite the power drop. My understanding is that not all (or even most) cards were drawing over spec, just some.

    Looks like the 470 and 460 should come in quite a bit lower in power.
  • extide - Thursday, July 7, 2016 - link

    Nope, it was all (reference) cards.
  • eddman - Thursday, July 7, 2016 - link

    I'm impressed that AMD managed to release this driver so fast, and it does the job. Very nice support.

    The power connector is still overloaded, though, but that's safer than overloading the MB. I'm not a fan of it either way.

    I don't understand why they didn't simply go with two 6-pin connectors or an 8-pin. Was it cost? Did they want to give the impression that it doesn't consume much power? OCing potential was also affected quite badly by this decision.
  • Deukish - Thursday, July 7, 2016 - link

    Having the 6/8-pin go over spec is a complete non-issue. Though the 6-pin is rated for 75W, extensive testing from multiple sources has shown that it can draw up to 150W for extended periods without any detrimental effects.

    Also, most AIB cards are going to be using 8-pin connectors regardless. And considering the terrible reference cooler on the 480, OCing wouldn't be worthwhile regardless of whether they went with two 6-pins or an 8-pin.
  • eddman - Thursday, July 7, 2016 - link

    I've been aware of that for, I don't know, 7 or 8 years. You can draw more power from a 6-pin, but standards are there for a reason. Again, the question is: why did AMD go with a single 6-pin?

    Aren't third-party coolers a thing? It's really not an excuse.
  • shabby - Thursday, July 7, 2016 - link

    They assumed the gpu was more efficient.
  • SunnyNW - Thursday, July 7, 2016 - link

    "Did they want to give the impression that it doesn't consume much power?" The answer is most likely Yes. Do not believe it has anything to do with cost. It has also been demonstrated that AMD has the 6pin connected as if it were an 8pin.
    Another very plausible explanation is that they did have a 110-120 watt card but decided to up the clocks, for performance reasons, after most everything was already finalized, so basically very late in the process.
  • prisonerX - Thursday, July 7, 2016 - link

    I think they figured it would fit into the power envelope and wanted the 6-pin to demonstrate that systems with only a 6-pin could use the card. If it had an 8-pin, some people would say "oh, I don't have an 8-pin, it won't run on my PSU" or something along those lines.
  • fanofanand - Friday, July 8, 2016 - link

    I think you hit the nail on the head. Most lower-powered PSUs or older PSUs simply don't have an 8-pin; the casual user will look at that and say "I don't have that, I guess it's a 1060 for me".
  • RussianSensation - Friday, July 8, 2016 - link

    The issue here is that 75W is not the right standard for the 6-pin cable itself. This was explained a while back. The 75W spec is outdated - it was created in October 2004 and does not apply to modern PSUs.

    http://www.tomshardware.co.uk/power-supply-specifi...
  • BurntMyBacon - Friday, July 8, 2016 - link

    The spec has not been updated, but is still in use and therefore does apply to current PSUs. The spec is, however, not limiting on the sourcing side, only the drawing side. In other words, you are not allowed to create a device that requires more than the specified amount of power and still be spec compliant. You can, however, source as much power as you want assuming your distribution method is rated to handle it.

    The question is why PSU and motherboard manufacturers would go to the cost and effort of making a power distribution system that is more capable than the devices that will use it. Part of the answer is that spec-compliant connectors already have that capability, so the cost and effort are both low. It also gives them more margin (and fewer RMAs) to work with. Even if it is a niche, there is the assumption of overclocking, which often pushes these devices out of spec. Overclockers are few in number, but they are often the people that others come to for advice on computer builds, and they make up a very vocal crowd on tech sites. The marginal extra cost may be justified indirectly by making sure they don't have issues with your product when they overclock (up to a reasonable point, anyway). Factory overclocks are now commonplace as well, and not every manufacturer makes sure the power circuitry is updated to stay spec compliant.
  • emn13 - Friday, July 8, 2016 - link

    A friend of mine builds cases, and apparently it's quite common for even basic things like plain old dimensions to be out of spec; i.e. if you build a case according to the maximum dimensions specified for ATX components, many won't actually fit.

    It just doesn't surprise me that components are regularly out of spec; the whole industry clearly doesn't take it all that seriously.

    Not to mention that a current spec on a cable is a weird thing to even want to spec. As in: I can imagine wanting to specify the maximum power the *connector* should be able to handle, but that's going to be way, way higher. Similarly, I can imagine specifying the *minimum* current a power supply must be able to deliver to be ATX compliant (though that's odd too). But conversely, specifying the maximum a component may draw leads to components using many cables for no real reason other than compliance. If there really were a reason for the limited current spec, or if the 8-pin connector (or other hypothetical connectors) provided higher ratings, there would be a spec-valid way to use the efficient number of cables. But there's not.

    See e.g. http://www.playtool.com/pages/psuconnectors/connec... which is mostly about ATX, but also analyzes PCIe power connectors: the ATX side of the story should be specced to 6-8A... *per circuit!* But the PCIe *total* current spec for all *3* circuits is less than 7A. So even with bog standard ATX circuits (as used e.g. for hard drives or the CPU), you'd expect this connector to be able to deliver at least 216W, and probably 288W.

    I mean, a 100W power supply is enough for lots of devices, certainly a 150W supply. Perhaps if you really standardize to a lowest-common-denominator device like that, you get this low spec, with the absurd consequence that systems using more power while remaining within the spec are likely to need(*) many, many cables - for no particularly good reason.

    (*) the 8-pin PCIe connector isn't quite as bad, but also oddly underspecced.
  • emn13 - Friday, July 8, 2016 - link

    Incidentally, because people use (and PSU manufacturers often sell) converters between the various connectors, it's probably dangerous for a PSU to underspec a connector. Not to mention that it may even be more expensive to do so, because it means a greater diversity of components, which possibly costs more than the minute savings thinner cables might allow.
  • Oxford Guy - Friday, July 8, 2016 - link

    "Again, the question is why AMD went with a single 6-pin?"

    1) The connector provides enough power.

    2) AMD doesn't want the card to cut into later higher tier products by giving the card's overclocking even more potential?

    3) More connectors = more production cost.
  • emn13 - Friday, July 8, 2016 - link

    Since in practice almost every 6-pin connector is probably designed to provide at least 216W (if using the same components as the HDD and/or CPU connectors) and likely considerably more, I doubt that overclocking headroom is really limited (assuming the overall total system draw is within the PSUs capability, of course). Last time I bought PSUs they came with converters between various connectors; so it's not even optional for them to support that kind of wattage.

    It's weird that all other components using similar connectors are specced to about 3 times the current per circuit, but well... if you're willing to spend $4500 (https://pcisig.com/specifications/order-form#Top%2... PCI-SIG might let you in on that secret.
  • JasonMZW20 - Thursday, July 7, 2016 - link

    So, you're not a fan of overclocking either? Any overclocked CPU is drawing much more current than it should or is rated for (manufacturer 'spec' since you're so focused on specs). Thankfully, motherboard manufacturers gave us options on that front. And your motherboard isn't frying in the process.

    You know the -20 to +20% power targets actually DO affect power draw, as those options either lower or raise the upper power ceiling before the card throttles itself; though WattMan gives you MUCH finer granularity, so you can undervolt if the physical silicon is capable of it. Anyway, the reference board is just that - a point of reference. It's not made to OC, because if it was, it'd most certainly have an 8-pin.

    Like the AIB vendor cards that are coming soon ...
  • fanofanand - Friday, July 8, 2016 - link

    Stop making sense.
  • eddman - Friday, July 8, 2016 - link

    The point is that parts, when non-OCed, should ideally be within specs. I have OCed my CPU but I did it knowingly.

    The reference 480 is already on the verge of being out of spec, or slightly above it, in its stock state. In practice it's not exactly a problem, but standards are there for a reason for non-OCed stuff. If you OC it, it'll go even further above spec. There are reference cards out there from both companies that still fall within power specs even after OCing.

    SunnyNW makes a good point. Maybe AMD started with a lower power consumption where a 6-pin was fully within standards but then, at the last minute, realized that performance wasn't good enough and, instead of making a new card, up-clocked the already available cards to their max.

    Yes, for OCing consumers would be served better by custom cards but reference cards have usually been very OC friendly.

    Most PSUs can easily handle out-of-spec power draws, so probably the majority (almost all?) users shouldn't have any problems at all, but it's still a bit of an ugly situation.

    I would personally go with a 480 with an 8-pin or two 6-pin connectors.
  • RaistlinZ - Thursday, July 7, 2016 - link

    So basically gamers have three choices: 1) your PCIe slot being out of spec, 2) your 6-pin power being out of spec, or 3) reduced performance. This should never have been an issue in the first place. This is still a failure on their part.
  • bigboxes - Thursday, July 7, 2016 - link

    Or it's not an issue for the majority of users with a quality PSU. RTA.
  • Eugene86 - Friday, July 8, 2016 - link

    Quality PSU? Are we talking about the same user base here? The people buying the 480 are trying to get the best "value" for their money, unsuccessfully in this case. You really think they're going to be spending extra money on a "quality" PSU? These are the same people who can't afford an extra $50 for a higher end card.
  • miribus - Friday, July 8, 2016 - link

    Use a 6+2 connector instead of a 6-pin and you're within spec. If your card is installed, look at it: if you see 3 yellow wires going into it, calm down - had that same wiring been labeled an 8-pin, it'd be allowed to draw 150W and be within spec. If it doesn't have 3, change to a 6+2 cable and ignore the +2. The "out of spec" driver overrides the 75W limit, but that connector was rated for 150W. The 480 won't draw close to that.
  • Nagorak - Saturday, July 9, 2016 - link

    $200 is actually quite a bit for a graphics card. I'm not sure those who are spending in that price range are going to be getting the cheapest PSU possible. You can buy a decent PSU for $50 or so. The people who are cutting corners with the cheapest PSU possible are probably buying $100 graphics cards or have such a weak system they can't game on it anyway.
  • ACE76 - Monday, July 11, 2016 - link

    I just built my Skylake 6600K system and my Corsair 850 watt PSU only cost me $100... I have an 8GB RX 480 in my system... not a cheap system at all.
  • K_Space - Thursday, July 7, 2016 - link

    My shiny new card is running an average of 0.4 fps slower. DAMN you AMD!!
    Please!
  • Cygni - Thursday, July 7, 2016 - link

    It really wouldn't be the first card to go over spec on the 6-pin, ya know...
  • Drumsticks - Thursday, July 7, 2016 - link

    To be fair, performance barely drops (3%), but it's saving 20W (over 10%) on power. That also makes the efficiency game more interesting. If they can bring power down to that level, they're a good bit closer to Pascal than they were 7 days ago. Still not close, but certainly better.
  • hero4hire - Friday, July 8, 2016 - link

    Out of all the news this is the most relevant to me. I have a 970 and wasn't even considering a 480. I've been following Polaris news to see how it stacks to Pascal and if this efficiency (overclockability) is near even in a 490 or Fury pt2, it becomes a real option in the future. My $ will follow the best single card value. For all I know that could be a 1080Ti.

    Key takeaway is that Polaris isn't behind Pascal as much as was first indicated. It looks like AMD is pushing the 480 hard, but the architecture looks better when kept in its efficiency band, aka 10% power for 3% performance. Now let's double or triple the ROPs and see what happens at $400.
  • prisonerX - Thursday, July 7, 2016 - link

    or 4) stop being a butthurt Nvidia fanboy
  • Michael Bay - Friday, July 8, 2016 - link

    Oh, it's not nV users mending their backsides and proffering laughable excuses, no.
  • fanofanand - Friday, July 8, 2016 - link

    It is NV users who are levying fantastic prophecies of the sky falling and PSUs exploding left and right... AMD users (for the most part) are simply being realistic about what the actual issue is and what the ramifications are. The short answer is: much ado about nothing.
  • jjj - Thursday, July 7, 2016 - link

    Testing at 1080p would have been preferable as that's the main target res.
  • wolfemane - Thursday, July 7, 2016 - link

    Agreed!
  • SunnyNW - Thursday, July 7, 2016 - link

    Yes. Why the testing at 1440 and not 1080?
  • D. Lister - Thursday, July 7, 2016 - link

    Perhaps they were trying to stress the GPU for a clearer picture. At 1080p, the performance disparity between the old and the new drivers would probably be even smaller.
  • Ryan Smith - Thursday, July 7, 2016 - link

    Correct.
  • nick85er - Friday, July 8, 2016 - link

    yup 1440p is a better way to stress the card's 8GB than 1080p - and draw max wattage.
  • nick85er - Friday, July 8, 2016 - link

    perhaps ultra 3160p is the only way to activate the self-destruct mode we've been hearing so much about?

    i demand a retest.
  • nick85er - Friday, July 8, 2016 - link

    dammit, was laughing too hard.. meant 2160p
  • SunnyNW - Thursday, July 7, 2016 - link

    I had read that the implementation of the power distribution on the card results in it pulling power from the PCIe slot for the memory. If this is indeed the case, then is there still going to be over-draw from the slot in the event of a memory overclock? I myself am not too worried, but would it still present something similar to the initial problem for older boards?
  • prisonerX - Thursday, July 7, 2016 - link

    It's not the case. There is a power phase controller which can be configured to draw from each source as required, and some of the phases are used for memory and some for the GPU. The controller has likely been programmed to draw a max of 75W from the slot.
  • extide - Thursday, July 7, 2016 - link

    There are 6+1 phases: 6 for the GPU, and 1 for the memory. Three of the 6 GPU phases are hard-wired to the PCIe slot, and the other 3 to the PEG connector. They can't switch where each phase draws power from, but they can vary the duty cycle of each phase individually, so they can basically lower the duty cycle on the phases connected to the slot and raise it on the other 3. The card's power supply is already pretty overbuilt as it is, so making 3 of the phases work harder is not a concern; I bet they could run the whole GPU off 3 or 4 phases if they wanted to.

    According to the diagram on PCPer, the memory phase is wired to the PEG connector, not the PCIe slot, but in any case it cannot be changed in software at all; it would require a new PCB layout to change that.
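
    To illustrate (a toy model, not AMD's actual controller firmware; the launch currents are PCPer's measurements quoted earlier in the thread):

        # Toy model of rebalancing phase duty cycles between the two 12V inputs.
        # Hypothetical assumption: slot current tracks the share of total duty
        # cycle given to the three slot-fed phases.
        TOTAL_12V_A = 13.4   # ~6.8A slot + ~6.6A 6-pin at launch (PCPer)
        SLOT_LIMIT_A = 5.5   # PCIe slot rated maximum

        def split(slot_share):
            """Current drawn from the slot vs. the 6-pin for a given share."""
            return TOTAL_12V_A * slot_share, TOTAL_12V_A * (1 - slot_share)

        slot_a, peg_a = split(0.5)   # launch: roughly even duty cycles
        print(f"even split:  slot {slot_a:.2f}A, 6-pin {peg_a:.2f}A")

        slot_a, peg_a = split(SLOT_LIMIT_A / TOTAL_12V_A)  # cap the slot
        print(f"rebalanced:  slot {slot_a:.2f}A, 6-pin {peg_a:.2f}A")

        # even split:  slot 6.70A, 6-pin 6.70A
        # rebalanced:  slot 5.50A, 6-pin 7.90A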
  • D. Lister - Thursday, July 7, 2016 - link

    Good job. Now in a couple of days when the 1060 is released, we'll have a proper bout. No-shows are so boring.
  • barleyguy - Thursday, July 7, 2016 - link

    The 1060 won't be released in a couple of days. The GP106 isn't ready, and the GP104 is supply constrained by yields. They might make enough for reviewers just for a marketing bullet, but they won't be in the hands of customers. I'd bet a month at least, maybe longer.
  • HighTech4US - Thursday, July 7, 2016 - link

    How are those rose colored glasses working out for you?
  • silverblue - Friday, July 8, 2016 - link

    The rumour (i.e. SA) is that the 1060 will use a GP104 with fused-off sections, as the GP106 isn't ready yet. Charlie isn't always correct (I think he covers himself with the name of his website, which should be called SemiAvailable in all fairness), but it's not long before we find out what GPU is actually on the card. If the GP104 is indeed present, it's a failed part which they've salvaged; however, these usually use a bit more power than a true die created for the task at hand, which could be why the 1060's TDP is not lower - it's still two-thirds that of the 1080. It's not as if its clock speeds are any higher than its bigger brothers' at this time.

    I doubt the 1060 will be delayed, and I imagine it'll have the GP106 after all; 11 days to find out.
  • Michael Bay - Friday, July 8, 2016 - link

    >demerjan
    >source
    >on anything nV
  • silverblue - Saturday, July 9, 2016 - link

    Yeah, he does rag on NVIDIA quite heavily, but we'll find out in... 10 days.
  • D. Lister - Friday, July 8, 2016 - link

    @barleyguy

    I was referring to the NDA, which should be lifted soon. At this performance tier, I personally am more interested in the competition, than the product itself.
  • SunnyNW - Thursday, July 7, 2016 - link

    NOOB question: if you are installing an RX 480 coming from, say, a 7850, do you have to do anything special in regards to driver installation? I already have the newest driver installed, 16.7.1... Or is it simply plug-and-play?
  • extide - Thursday, July 7, 2016 - link

    Since you are going from AMD to AMD, you can install the new driver before or after. The safest way is to always uninstall the old driver BEFORE you take out the OLD card, and install the new driver AFTER you install the NEW card, especially when switching between AMD and nVidia, as otherwise the old drivers may not uninstall properly.
  • Nagorak - Saturday, July 9, 2016 - link

    Since they use the same drivers you should just switch the cards and everything should continue to work fine. If it doesn't work for some strange reason, just reinstall the drivers on top. Since you're not actually changing the driver there isn't much reason to worry about that.
  • Impulses - Thursday, July 7, 2016 - link

    At least they responded quickly and appropriately, in my book that counts for a lot... Still shouldn't have shipped the card as they did but their response was a positive in light of that negative.
  • Lolimaster - Friday, July 8, 2016 - link

    Guys, I think the best solution is simply to go to the WattMan tab of the drivers and manually undervolt your GPU.

    You're not limiting power, just fixing the overvoltage that AMD applies in order to increase yields.
  • Lolimaster - Friday, July 8, 2016 - link

    In fact, undervolting manually lets the GPU stay longer at turbo speeds (1266MHz), which means higher performance.
  • Dangerous_Dave - Friday, July 8, 2016 - link

    I don't know why anyone bought a reference 480 anyway. Crappy cooler, crappy power circuitry, no overclocking potential whatsoever, possible motherboard issues. The custom 480s always looked much more promising, with possibly 1500MHz+ overclocking, no power issues and a decent cooler, all for a fairly small price premium.
  • Oxford Guy - Friday, July 8, 2016 - link

    People bought reference GTX 480s.
  • fanofanand - Friday, July 8, 2016 - link

    Several years ago they did. :P
  • watzupken - Friday, July 8, 2016 - link

    I think it's possible to hit higher clocks with an 8-pin power connector. However, I feel the chip is not meant to operate at frequencies as high as Nvidia's. The result could be an exponential increase in power requirements, as well as heat output, if overclocked hard.
  • zodiacfml - Friday, July 8, 2016 - link

    Seems AMD went out of their way to give the 480 the best possible performance for the price. I'm afraid the GTX 1060 will trounce this in performance and efficiency. The question would be whether the $50 premium of the Nvidia card is worth it over this.

    I reckon the Nvidia card has more value in smaller enclosures.
  • Anato - Friday, July 8, 2016 - link

    How is the return current distributed? Voltage regulation is on the positive side, and they can change that distribution like they did, but I can't see how they could change the return currents to ground. And I doubt anyone is measuring that.

    Although there is ample ground connectivity in a x16 PCI-E slot, is it allowed to push more than 5.5A into the motherboard?
  • miribus - Friday, July 8, 2016 - link

    Their VRM is specifically designed to shift load between sources; they can apparently update the ratio via driver.
  • Anato - Saturday, July 9, 2016 - link

    Yeah, that part I know. Basically there is a 12V source -> VRM 0.8-1.5V -> chip & RAM -> ground. There is nothing between the chip and ground. This is of course a bit simplified - there are connections to bypass capacitors etc., but they don't affect the return currents. As the 12V feeds a total of around 14A in, there has to be a 14A ground connection too, and I don't think they can distribute ground currents with their design. The chip's VRMs likely (certainly) feed one pool of chip voltage.

    In a x16 PCI-E slot there are >60 ground connections. If these are rated around the same as +12V, you could sink >60A into the x16 PCI-E slot 8). But I don't think this is allowed by the standard, and it most certainly would fry something on the motherboard.
  • extide - Friday, July 8, 2016 - link

    There are 6 phases for the GPU: 3 connected to the PCIe slot, 3 connected to the PEG connector. I am not exactly sure how the card is grounded. It could have one big common ground, which is what I would expect, so this wouldn't be an issue. Or each phase's ground could be connected to the same place it gets power from (i.e. the 3 phases getting +12V from the PCIe slot are also grounded there, and the other 3 phases are grounded at the PEG connector).

    They change the ratios by changing the duty cycles between the first 3 and the latter 3 phases. It would be cool to hook up a scope to those phases; you could see it.
  • Archie2085 - Friday, July 8, 2016 - link

    @Ryan: There is a lot of news about the 480 performing up to 5% better at a lower voltage of 1.050V, while also saving 30W compared to the stock voltage. Looks like leakage is high at higher temps / higher voltages? Can you please test this in the full review?
  • RBFL - Sunday, July 10, 2016 - link

    I have an RX 480 8GB which, at stock, runs 1.075V in the maximum state. Running the Passmark graphics test, driven by an i7, it peaked at 110W and typically ran at around 100W.
  • BrokenCrayons - Friday, July 8, 2016 - link

    This is one of those "eh, whatever" things that simply doesn't matter to me. I think the only point worth noting is that shedding ~19 watts of consumption results in an insignificant performance drop. It highlights that AMD is pushing Polaris pretty hard to wring everything they can out of it, and that lower-end cards that don't get as much attention and aren't clocked as high will probably do better in performance per watt. That's hopeful, since the entire 14/16nm move has been rather disappointing so far from an electrical and heat perspective, and that's been the case with BOTH GPU companies. Here's hoping low-end and mobile parts are better in that regard.
  • miribus - Friday, July 8, 2016 - link

    If you have a 6+2-pin connector, use that instead of a 6-pin. It has all the power circuits, and then you'd be well within spec. Of course, that assumes AMD populated all of their power pins on the card; I have no idea, I don't have one. A multimeter would confirm it for you. If all 3 power pins on the card show no resistance between them, plug in a 6+2 connector, ignoring the +2, and you'll have extra pins to share the load. Also, keep in mind the spec they are talking about is the PCIe spec, not the spec of the components in your system. Depending on quality, those components are likely not out of spec at all, just past the mark the PCIe guys picked. The crimps, pins, and wires can typically go higher; hot rooms or very hot systems become a problem, though. The method above would solve that and nullify the problem entirely for anyone with such a supply (and AMD connecting all pins... I read somewhere that they did). A simple ugly cable adapter would solve the 6-pin connector issue too (if you don't have a 6+2), bringing the connector into spec.

    Unless you want to be really pedantic about it and say that because the +2 pins aren't connected it isn't allowed to carry the 150W - by the letter you're probably right, but I counter with: look really hard at those +2 pins and notice the tiny wires? They aren't expected to carry any significant load.
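
    A quick sketch of the per-pin load that implies (Python; 6.6A is PCPer's measured launch-driver 6-pin current from earlier in the thread):

        # Same 6-pin draw spread over two vs. three live +12V wires.
        SIX_PIN_DRAW_A = 6.6

        for live_wires in (2, 3):
            per_pin_a = SIX_PIN_DRAW_A / live_wires
            print(f"{live_wires} live +12V wires: {per_pin_a:.1f}A per pin")

        # 2 live +12V wires: 3.3A per pin
        # 3 live +12V wires: 2.2A per pin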
  • extide - Friday, July 8, 2016 - link

    Yeah, PSU wires are typically good for 8A sustained, and easily twice that or more for quick bursts. The Molex connector is what I would expect to fry first, but even those are rated for 6-8A per pin, I believe. So 3 pins * 6A * 12V = 216W at minimum would be safe through a 6-pin PEG connector.
  • windozer - Saturday, July 9, 2016 - link

    Where'd AMD get this 150W TDP number, when the card actually draws more?
  • silverblue - Saturday, July 9, 2016 - link

    TDP is the thermal design power, not necessarily how much power the card will use, as far as I know. Regardless, with a 20W drop in power, is this 150W figure going to be a problem?
  • Hrel - Wednesday, July 13, 2016 - link

    You guys are gonna do all the benchmarking within spec, right?

    Meaning, the lowest scores, compatibility mode?

    I would certainly think so, since doing a review based on a card that pushes either the PCI-E rail or PCI-E cable beyond spec is wildly dishonest.

    Seems like, when held to spec, the card is actually a bit slower in Crysis and FurMark. So if you do benchmarks in anything other than compatibility mode it should be clearly marked as overclocked and specifically stated, in large bold red font, in the article that this is not within design spec.

    Glad to see it's a not drastic impact on performance though.

    GTX 1060 numbers are, I'm sure, going to make $240 for that card look like a terrible deal.
