All in all, this doesn't really change the market all that much.
I still very firmly feel that the R9 290 right now (Q3 2014) remains the best price:performance of the mid to high end cards. That, and the 4GB of VRAM, which may make it more future-proof.
What really is interesting at this point is how AMD will respond to Nvidia's Maxwell.
I agree - Tonga is not bad, but on the other hand it does not change anything substantially compared to Tahiti. This would have been a nice result 1 - 1.5 years after the introduction of Tahiti. But that was almost 3 years ago! The last time a GPU company showed no real progress after 3 years, they went out of business shortly afterwards...
And seeing how AMD brags about beating the GTX 760 almost makes me cry. That's the double cut-down version of a 2.5-year-old chip which is significantly smaller than Tonga! This is only a comparison because nVidia kept this card at a far too high price because there was no competitive pressure from AMD.
If this is all they have, their next generation will get stomped by Maxwell.
Nope. But in the end the result performs just the same, at almost the same power consumption. Sure, there are some new features... but so far - and I expect for the foreseeable future - they don't matter.
This is the first mid-range card to have all the value-add features of the high-end cards. I wish AMD would leverage TrueAudio better, but the other features and the nice TDP drop are welcome.
The color compression enhancement is a very interesting feature. I think that in itself deserves a little applause because of its significance in the design compared to the 280's. I think this is significant not so much as a performance feature, but similar to what Maxwell represented for NV in terms of efficiency. Both are respectable design improvements, in different areas. It's a shame they don't cross-license... seems like such a waste.
Well, the TDP drop is real, but it mostly saves virtual power. By this I mean that the 280 / 7950 never came close to using 250 W, and hence the savings from Tonga are far less than the TDP difference makes it seem. The average across different articles seems to be a ~20 W saving at the wall, which puts it at roughly power-efficiency parity with cards like the GTX 670.
The color compression could be Tonga's best feature. But I still wonder: if Pitcairn on the 270X comes so close to 285 and 280 performance with a 256-bit memory bus and without color compression... how much does it really matter (for the 285)? To me it seems that Tahiti most often simply didn't need that large a bus, rather than color compression working wonders for Tonga. Besides, the GTX 770 and GTX 680 also hold up fine at that performance level with a 256-bit bus.
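To make the delta color compression discussion a bit more concrete, here is a minimal toy sketch of the general idea: split a render target into tiles, store one anchor pixel plus small per-pixel deltas, and fall back to raw storage when the deltas don't fit. AMD hasn't published Tonga's actual algorithm or tile layout, so the tile size, delta width and function names below are all made up for illustration.

```python
# Toy sketch of delta color compression (illustrative only -- AMD's actual DCC
# algorithm and tile/bit layouts are not public; names and sizes are made up).
# Idea: per tile, store one anchor pixel plus small per-pixel deltas. If every
# delta fits in a reduced bit budget the tile is written compressed, otherwise
# it falls back to the raw (uncompressed) representation.

def compress_tile(tile, delta_bits=4):
    """tile: list of (r, g, b) 8-bit pixels. Returns (compressed?, payload_bytes)."""
    anchor = tile[0]
    limit = 1 << (delta_bits - 1)          # signed delta range, e.g. -8..+7 for 4 bits
    for px in tile[1:]:
        if any(not (-limit <= c - a < limit) for c, a in zip(px, anchor)):
            return False, 3 * len(tile)    # fallback: store the tile raw
    # anchor stored raw (3 bytes) + three small deltas per remaining pixel
    payload = 3 + (len(tile) - 1) * 3 * delta_bits // 8
    return True, payload

# A smooth gradient tile (sky, UI, etc.) compresses well; a noisy tile does not.
smooth = [(100 + i, 80 + i, 60 + i) for i in range(8)]
noisy  = [(i * 31 % 256, i * 57 % 256, i * 13 % 256) for i in range(8)]
print(compress_tile(smooth))   # (True, 13)  -> 13 bytes instead of 24
print(compress_tile(noisy))    # (False, 24) -> stored raw
```

The practical win is that the ROPs move fewer bytes per tile over the bus; the surface itself generally still has to reserve its full uncompressed footprint, which is relevant to the 2GB debate further down the thread.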
The TDP drop is something I did not think about being a paper-launch value. You make a good point about the color compression too. It will be interesting to see how both fare. That may be an interesting topic to follow up on during the driver refresh.
As an owner of a GTX 260 with a 448-bit bus, I can tell you that with anti-aliasing it matters quite a bit, as the bus becomes the limiter. The shader count is usually not the limiter at the low-end and mid-range resolutions these cards will typically be paired with. My GTX 260 and 1280x1024 monitor kind of illustrate that with 216 shaders/896MB. :-)
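To put some rough numbers behind the anti-aliasing point, here's a crude back-of-the-envelope of framebuffer traffic versus MSAA level. The overdraw factor and per-sample byte counts are assumptions, and it ignores framebuffer compression and texture traffic entirely; the only point is how the totals scale with the sample count.

```python
# Crude back-of-envelope for why MSAA leans on the memory bus (simplified:
# ignores framebuffer compression, caching, and texture traffic; overdraw
# factor and byte counts are assumptions for illustration).
def fb_traffic_gbps(width, height, fps, msaa=1, bytes_color=4, bytes_depth=4, overdraw=2.0):
    samples = width * height * msaa
    # write color+depth per sample with some overdraw, then a resolve/scan-out pass
    per_frame = samples * (bytes_color + bytes_depth) * overdraw + width * height * bytes_color
    return per_frame * fps / 1e9

for aa in (1, 4, 8):
    print(f"1920x1080@60, {aa}x MSAA: ~{fb_traffic_gbps(1920, 1080, 60, aa):.1f} GB/s")
# The totals scale roughly linearly with the sample count, while shader work
# grows far more slowly -- which is why a wide bus (like the GTX 260's 448-bit
# one) helps most once AA is turned on.
```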
It isn't pretty, but I don't see anything that forces me to upgrade yet. Think I've got two more generations or so to wait on before performance is significant enough, or a groundbreaking feature would do it. I'm actually considering upgrading out of boredom and interest in gimmicky features more than anything else at this point.
The GTX 260 is like 6 years old now. It lacks DX11, has less than 1 GB of (relatively slow) GDDR3 VRAM, and overall should be 3-4 times slower than an R9 285 or R9 290, I guess.
I really didn't think anybody still used these old-gen cards (e.g. I have an HD 7950 Boost Dual-X, which is essentially identical to the R9 280).
Because they would lose money! LOL. And they are both about the same anyway, except AMD goes for brute force to get performance (like using a V8) and Nvidia goes for efficiency (like a turbocharged 4-cyl or 6-cyl).
"And seing how AMD brags to beat GTX760 almost makes cry. That's the double cut-down version of a 2.5 years old chip which is significantly smaller than Tonga! This is only a comparison because nVidia kept this card at a far too high price because there was no competitive pressure from AMD."
You are being pretty silly here. Both AMD and Nvidia have been rebranding a lot of cards these last few gens. You can't go after AMD for rebranding a 2-3 year old chip, and then say it's fine if Nvidia does it and blame AMD's 'lack of competitive pressure'. If lack of competitive pressure was the reason for rebranding, then there was a lack of competitive pressure on both sides.
And I highly doubt the 285 is 'all AMD has'. This was just a small update to their product line, to bring some missing features (FreeSync, TrueAudio etc...) and reduced power consumption to the 28x series. I'm sure there is a 3xx series coming down the road (or whatever they will call it). Both AMD and Nvidia have been squeezing all they can out of older architectures for the past few years; you can't really put the blame on one or the other without being hypocritical.
The point is that Tonga is NOT a rebrand. It's a brand-new chip; AMD themselves call it the 3rd generation of GCN. Making a new chip costs AMD a significant amount of money, which is why they haven't bothered yet to update Pitcairn to at least 2nd gen GCN (1.1). And I'm totally fine with that. It's also OK for nVidia to use GK104 for the GTX 760. What's not OK - from my point of view - is AMD investing in this new chip Tonga and hardly getting any real-world benefit over the 3 year old Tahiti designs. If nVidia introduces a Maxwell which performs and costs them just the same as the previous Kepler, I'll call them out for this as well. But that seems pretty much excluded, from what we've seen so far.
"And I highly doubt the 285 is 'all amd has'." It's their 3rd gen GCN architecture, as they say themselves. There's going to be a bigger chip using this architecture, but apart from that I doubt we'll see anything from AMD in the next year which is not yet in Tonga.
The one nice thing about the 285 is that it will have resale value, which has been lost on the 280-290 series thanks in large part to coin mining. There's a good chance that most won't feel that the 285 (and future incarnations) were run into the ground like the earlier ones were.
Nah, what's interesting is that Maxwell may not be worth "responding" to. It's an almost totally mobile-focused design, one that's not even fully out yet. If these benchmarks hold true then it's very exciting for AMD's upcoming high end. Nvidia may end up with a 512-bit bus as well, but AMD's bandwidth optimizations will mean a similarly specced card of theirs will still handily beat anything NVIDIA has in terms of resolution scaling.
Heck it may even be enough to get a single GPU capable of running games at 4k at a reasonable fps. And that would be awesome. Maxwell might be good for Nvidia's mobile business, but I doubt it's going to help them take back the top spot for high end stuff from AMD.
UVD has always supported VC-1. The first version supported full decode of H.264 and VC-1. You are thinking of Nvidia, who didn't have full hardware decode on a real desktop part until Fermi.
One of the main advantages of first generation UVD (ATI Radeon HD2000 series) over Nvidia, was the full DXVA VLD support of both 1080p H.264 L4.1 (BluRay spec) and VC-1.
The article says that the Sapphire card has "1x DL-DVI-I, 1x DL-DVI-D, 1x HDMI, and 1x DisplayPort". Can you be more precise as to which versions of the spec are supported? Is it HDMI 1.4 or HDMI 2.0? I believe since this refers to MST, it's only HDMI 1.4 and a DisplayPort connection is required in MST mode for 4K@60Hz output?
Reading the recent GPU articles, I'm very puzzled why HDMI 2.0 adoption is still lacking in GPUs and displays, even though the spec has been out there for about a year now. Is the PC industry reluctant to adopt HDMI 2.0 for some (political(?), business(?)) reason? I have heard only bad things about DisplayPort 1.2 MST to carry a 4K@60Hz signal, and I'm thinking it's a buggy hack for a transitional tech period.
If AMD's newest next-gen graphics card only supports HDMI 1.4, that is mind-boggling. Please tell me I'm confused and this is an HDMI 2.0-capable release?
You can do 4K SST on both Nvidia and AMD cards as long as they are DisplayPort 1.2 capable. It depends on your screen. There is no 600MHz HDMI on any graphics processor. Nor is there much support from monitors or TVs, as most don't do 600MHz.
Thanks! I was not actually aware that SST existed. I see here http://community.amd.com/community/amd-blogs/amd-g... that AMD is referring to SST as being the thing to fix the 4K issue, although the people in the comments on that link report that the setup is not working properly.
How do people generally see SST? Should one defer buying a new system now until proper HDMI 2.0 support comes along, or is SST+DisplayPort 1.2 already a glitch-free user experience for 4K@60Hz?
DP SST 4K/60Hz should be every bit as glitch-free as proper HDMI 2.0 (be careful though with the latter, since some 4K TVs claiming to accept 60Hz 4K resolutions over HDMI will only do so with YCbCr 4:2:0). DP SST has the advantage that even "old" gear on the graphics card side can do it (such as Radeons from the HD 6xxx series - from the hw side, if it could do DP MST 4K/60Hz it should most likely be able to do the same with SST too; the reason the MST hack was needed in the first place is entirely on the display side). But if you're planning to attach your 4K TV to your graphics card, a DP port might not be of much use, since very few TVs have one.
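For anyone wondering why the MST hack / HDMI 2.0 question keeps coming up, a quick bandwidth sanity check makes it obvious. The figures below are the nominal spec numbers (standard CEA 4K60 timing, effective link rates after 8b/10b coding) as I understand them, so treat it as a rough check rather than gospel:

```python
# Quick sanity check on why 4K@60Hz needs DP 1.2 SST or HDMI 2.0, but not HDMI 1.4.
# Numbers are nominal: standard CEA 3840x2160@60 timing (594 MHz pixel clock) and
# effective link rates after 8b/10b coding.
pixel_clock_hz = 594e6                       # 4400 x 2250 total pixels @ 60 Hz
payload_gbps = pixel_clock_hz * 24 / 1e9     # 24-bit RGB, ~14.3 Gbps

links_gbps = {
    "HDMI 1.4 (340 MHz TMDS)": 340e6 * 24 / 1e9,   # ~8.2
    "HDMI 2.0 (600 MHz TMDS)": 600e6 * 24 / 1e9,   # ~14.4
    "DP 1.2 SST (4x HBR2)":    4 * 5.4 * 0.8,      # 21.6 raw -> ~17.3 usable
}
print(f"4K60 RGB payload: ~{payload_gbps:.1f} Gbps")
for name, cap in links_gbps.items():
    verdict = "fits" if cap >= payload_gbps else "does not fit (hence 30Hz, 4:2:0 or the MST hack)"
    print(f"  {name}: ~{cap:.1f} Gbps -> {verdict}")
```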
I won't get another AMD video card until idle multi-monitor power consumption gets fixed. According to other websites, power consumption in that case increases substantially, whereas NVidia video cards have almost the same consumption as when using a single display. In the case of the Sapphire 285 Dual-X it increases by almost 30W just by having a second display connected!!
I think Anandtech should start measuring idle power consumption when more than one display is connected to the video card / in multi-monitor configurations. It's important information for the many users who not only game but also have productivity needs.
This is only partly true. AMD cards nowadays can stay at the same clocks in multi-monitor mode as in single-monitor mode, though it's a bit more limited than with GeForces. Hawaii and Tonga can keep the same low clocks (and thus idle power consumption) with up to 3 monitors, as long as they are all identical (or more accurately, as long as they all use the same display timings). But if they have different timings (even if it's just 2 monitors), they will always clock the memory to the max clock (this is where Nvidia Kepler chips have an advantage - they will stay at low clocks even with 2, but not 3, different monitors).

Actually I believe if you have 3 identical monitors, current Kepler GeForces won't be able to stick to the low clocks, but Hawaii and Tonga can, though unfortunately I wasn't able to find the numbers for the GeForces (the ht4u.net R9 285 review has the numbers for it; sorry, I can't post the link as it won't get past the AnandTech forum spam detector, which is lame).
A twin-monitor configuration where the secondary display is smaller / has a lower resolution than the primary one is a very common (and logical) usage scenario nowadays, and that's what AMD should sort out first. I'm positively surprised that on the newer Tonga GPUs frequencies remain low if both displays are identical (according to the review you pointed out), but I'm not going to purchase a different display (or limit my selection) to take advantage of that when there's no need to with equivalent NVidia GPUs.
Fixing this is probably not quite trivial. The problem is that if you reclock the memory, you can't honor memory requests for display scan-out for some time. So, for a single monitor, what you do is reclock during the vertical blank. But if you have several displays with different timings, this won't work for obvious reasons, whereas if they have identical timings, you can just run them essentially in sync, so they have their vertical blank at the same time. I don't know how nvidia does it. One possibility would be a large enough display buffer (but I think it would need to be on the order of ~100kB or so, so not quite free in terms of hw cost).
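That ~100kB estimate is easy to sanity-check: the buffer has to cover scan-out for however long the memory is unavailable during retraining. The 100 microsecond blackout window below is an assumed number, purely for illustration.

```python
# Sanity check on the "~100 kB display buffer" figure: during a memory reclock
# the DRAM can't serve scan-out, so a buffer must cover that blackout window.
# Both the blackout time and the display modes below are assumptions.
def buffer_bytes(h_pixels, v_pixels, refresh_hz, bpp=4, blackout_us=100):
    scanout_bytes_per_s = h_pixels * v_pixels * refresh_hz * bpp
    return scanout_bytes_per_s * blackout_us * 1e-6

print(f"2560x1440@60: ~{buffer_bytes(2560, 1440, 60) / 1024:.0f} KiB")
print(f"3840x2160@60: ~{buffer_bytes(3840, 2160, 60) / 1024:.0f} KiB")
# Tens to a couple hundred KiB per display head to hide a ~100 microsecond
# retrain -- non-trivial but not impossible, which fits the idea that NVIDIA
# may simply buffer its way through mismatched timings.
```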
I've used multi-monitor setups with AMD & NVIDIA cards. I would take that 30W hit if it means it works well. NVIDIA is too aggressive with its low-power mode: if you have video on one screen & a game on the other, it will remain at the clock speed of the 1st event (if you start the video before the game loads, it will be stuck at the video clocks).
I use a 780 Ti currently; the R9 290X I had previously worked better in that it would always clock up...
I would like to note that if memory compression is effective, it should not only improve bandwidth but also reduce the need for texture memory. Maybe 2GB with compression is closer to 3GB in practice, at least if the ~40% compression advantage is true.
Obviously, there is no way to predict the future, but I think your conclusion concerning 2GB boards should take compression into account.
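One caveat on the "2GB acts like 3GB" idea: the ~40% figure AMD quotes is a bandwidth number for render targets, and textures are already block-compressed, so it shouldn't be applied to the whole 2GB. A rough split-budget estimate (with an assumed render-target share) looks more like this:

```python
# The ~40% delta-compression figure is a *bandwidth* number for render targets;
# textures are already block-compressed (DXT/BC), so it can't simply be applied
# to the whole 2GB. The split below is an assumption purely for illustration.
vram_gb = 2.0
render_target_share = 0.25   # assumed fraction of VRAM holding compressible RT data
dcc_ratio = 0.40             # assumed average savings on that data, if it helps footprint at all

effective = vram_gb * (1 - render_target_share) + vram_gb * render_target_share / (1 - dcc_ratio)
print(f"Optimistic effective capacity: ~{effective:.2f} GB (vs 2.00 GB physical)")
# Even under generous assumptions this is nowhere near 2GB acting like 3GB, and
# in practice the driver may still reserve the full uncompressed footprint.
```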
If GCN 1.2 (instead of a GCN 2.0) is what AMD has to offer as the new architecture for next year's cards, Maxwell (based on the 750 Ti vs. 260X tests) will hit AMD hard in terms of performance per watt and production cost (not price), and thus their net income.
I'm afraid this won't be enough (but I hope it is). Anyway, as Nvidia is expected to launch their 256-bit Maxwell card soon, we'll have the answer shortly.
"something of a lateral move for AMD, which is something we very rarely see in this industry"
Really? Seems like the industry has been rebadging for the last two release cycles. How about starting to test and show results on 4k screens? 60Hz ones are only $500 now, and that will put a little pressure on the industry to stop coasting. I have no intention of spending money on a minor bump up in specs. Bitcoin mining demand can't last forever.
But on this matter, about Mantle: maybe a slower processor would show less of that performance drop, or even keep performing better than DirectX 11. Maybe one more test on a slower core, on an FX machine?
What were AMD thinking? How can the 285 be a replacement for the 280, given its reduced VRAM, while at the same time AMD is pushing Mantle? Makes no sense at all.
Despite driver issues, I'd kinda gotten used to seeing AMD be better than NVIDIA in recent times for VRAM capacity. A new card with only 2GB is a step backwards. All NVIDIA has to do is offer a midrange Maxwell with a minimum 4GB and they're home. No idea if they will. Time will tell.
There you have it, and with no issues with boost. Sorry Chizow, so much for that. :P
Thanks for the in-depth review, Ryan. It appears that power consumption is going to vary from implementation to implementation. Lacking a reference model makes it tricky. Another review I read compared Gigabyte's Windforce OC 285 to a similarly mild OC'd 280, finding a substantial difference in the 285's favor.
No issues with Boost once you slap a third party cooler and blow away rated TDP, sure :D
But just as I said, AMD's rated specs were bogus, in reality we see that:
1) The 285 is actually slower than the 280 it replaces, even in highly overclocked factory configurations (original point about theoretical performance, debunked)
2) The TDP advantages of the 285 go away; at the 190W target TDP, AMD trades performance for efficiency, just as I stated. Increasing performance through better cooling results in higher TDP and lower efficiency, to the point where it is negligible compared to the 280.
It's obvious AMD wanted the 285 to look good on paper, saying hey look, it's only 190W TDP, when in actual shipping configurations (which also make it look better due to factory OCs and 3rd party coolers), it draws power closer to the 250W 280 and barely matches its performance levels.
In the end one has to wonder why AMD bothered. Sure, it's cheaper for them to make, but this part is a downgrade for anyone who bought a Tahiti-based card in the last 3 years (yes, it's nearly 3 years old already!).
I think the main reason for this card was to bring things up to par when it comes to features. The 280 (and 280X) were rebadged older high-end cards (7950, 7970), and while these offered great value when it came to raw performance, they lack features that the rest of the 200 series will support, such as FreeSync and TrueAudio, and this feature discrepancy was confusing to customers. It makes sense to introduce a new card that brings things up to par when it comes to features. I bet they will release a 285X to replace the 280X as well.
Marginal features alone aren't enough to justify this card's cost, especially in the case of FreeSync which still isn't out of proof-of-concept phase, and TrueAudio, which is unofficially vaporware status.
If AMD released this card last year at Hawaii/Bonaire's launch at this price point OR released it nearly 12 months late now at a *REDUCED* price point, it would make more sense. But releasing it now at a significant premium (+20%, the 280 is selling for $220, the 285 for $260) compared to the nearly 3 year old ASIC it struggles to match makes no sense. There is no progress there, and I think the market will agree there.
If it isn't obvious now that this card can't compete in the market, it will become painfully obvious when Nvidia launches their new high-end Maxwell parts as expected next month. The 980 and 970 will drive down the price on the 780, 290/X and 770, but the real 285 killer will be the 960 which will most likely be priced in this $250-300 range while offering better than 770/280X performance.
You just can't admit to being wrong. It maintains boost fine - end of story. That's what I disagreed with you on in the first place. No boost issues - the 290 series had thermal problems. Slapping a different cooler doesn't raise TDP, it just removes obstacles towards reaching that TDP. With an inadequate cooler, you're getting temp-throttled before you ever reach rated TDP. Ask Ryan, he'll set you straight.
On top of this, depending on which model 285 you test, some of them eat significantly less power than an equivalent 280. Not all board partners did an equal job. Look at different reviews of different models and you'll see different results. Also, performance is better than I figured it would be, and in most cases it is slightly faster than 280. Which again, I never figured would happen and never claimed.
Who cares what you disagreed with? The point you took issue with was a corollary to the point I was making, which turned out to be true: the theoreticals were misstated and inaccurate as a basis for any conclusion about the 285 being faster than the 280.
As we have seen:
1) The 285 barely reaches parity but in doing so, it requires a significant overclock which forces it to blow past its rated 190W TDP and actually draws closer to the 250W TDP of the 280.
2) It requires a 3rd party cooler similar to the one that was also necessary in keeping the 290/X temps in check in order to achieve its Boost clocks.
As for Ryan setting me straight, lmao, again, his temp tests and subtext already prove me to be correct:
"Note that even under FurMark, our worst case (and generally unrealistic) test, the card only falls by less than 20Mhz to 900MHz sustained."
So it does indeed throttle down to 900MHz even with the cap taken off its 190W rated TDP and a more efficient cooler. *IF* it was limited to a 190W hard TDP target *OR* it was forced to use the stock reference cooler, it is highly likely it would indeed have problems maintaining its Boost, just as I stated. AMD's reference specs trade performance for efficiency, once performance is increased that efficiency is reduced and its TDP increases.
Look, I get it, you're an Nvidia fanboy. But at least you admitted you were wrong, in your own way, finally. It sustains boost fine. Furmark makes a lot of cards throttle - including Maxwell! Whoops! Should we start saying that Maxwell can't hold boost because it throttles in Furmark? No, because that would be idiotic. I think Maxwell is a great design.
However, so is Tonga. Read THG's review of the 285. Not only does it slightly edge out the 280 on average performance, but it uses substantially less power. Like, 40W less. I'm not sure what Sapphire (the card reviewed here) is doing wrong exactly - the Gigabyte Windforce OC is fairly miserly and has similar clocks.
LOL Nvidia fanboy, that's rich coming from the Captain of the AMD Turd-polishing Patrol. :D
I didn't admit I was wrong, because my statement to any non-idiot was never dependent on maintaining Boost in the first place. But I am glad to see that not only is the 285 generally slower than the 280 without significant overclocks, it still has trouble maintaining Boost despite a higher TDP than the rated 190W and a better-than-reference cooler.
You could certainly say Maxwell doesn't hold boost because it throttles in Furmark, but that would prove once and for all you really have no clue what you are talking about since every Nvidia card they introduced since they invented Boost has no problems whatsoever hitting their rated Boost speeds even with the stock reference blower designs. The difference of course, is that Nvidia takes a conservative approach to their Boost ratings that they know all their cards can hit under all conditions, unlike AMD which takes the "good luck/cherry picked" approach (see: R290/290X launch debacle).
And finally, about other reviews, lol: for every review that says the 285 is better than the 280 in performance and power consumption, there is at least one more that echoes the sentiments of this one. The 285 barely reaches parity and doesn't consume meaningfully less power in doing so. But keep polishing that turd! This is an ASIC only a true AMD fanboy could love some 3 years after the launch of the chip it is set to replace.
Oh and just to prove I can admit when I am wrong: you are right, Maxwell did throttle and fail to meet its Boost speeds in Furmark, but these are clearly artificially imposed driver limitations, as Maxwell has shown it can easily OC to 1300MHz and beyond:
Regardless, any comparisons of this chip to Maxwell are laughable, Maxwell introduced same performance at nearly 50% reduction in TDP or inversely, nearly double the performance at the same TDP all at a significantly reduced price point on the same process node.
What does Tonga bring us? 95-105% of R9 280's performance at 90% TDP and 120% of the price almost 3 years later? Who would be happy with this level of progress?
Nvidia is set to introduce their performance midrange GM104-based cards next week; do you think ANYONE is going to draw parallels between those cards and Tonga? We already know what Maxwell is capable of and it set the bar extremely high, so if the GTX 970 and 980 come anywhere close to those increases in performance and efficiency, this part is going to look even worse than it does now.
You were wrong about it being unable to hold boost, you claimed that GCN 1.1 can't hold boost despite clear evidence to the contrary. Silly. Then you were wrong about Maxwell and Furmark - though you kind of admitted you were wrong.
Regarding that being a "driver limitation" you can clock a GPU to the moon, and it's fine until it gets a heavy load. However most users won't even know they're being throttled. I had this same discussion YEARS ago with a Pentium 4 guy. You can overclock all you want - when you load it up heavy, it's a whole new game. In that case the user never noticed until I showed him his real clocks while running a game.
Tonga averages a few % higher performance, dumps less heat into your case, and uses less power. Aside from this Sapphire Dual X, most 285 cards seem to use quite a bit less power, run cool and quiet. With all that being said, I think the 280 and 290 series carry a much better value in AMD's lineup. I'm certainly not a fanboy, you're much closer to claiming that title. I've actually used mostly Nvidia cards over the years. I've also used graphics from 3dfx, PowerVR, and various integrated solutions. My favorite cards over the years were a trusty Kyro II and a GeForce 3 vanilla, which was passively cooled until I got ahold of it. Ah those were the days.
No, I said being a GCN 1.1 part meant it was *more likely* to not meet its Boost targets, thus overstating its theoretical performance relative to the 280. And based on the GCN 1.1 parts we had already seen, this is true, it was MORE LIKELY to not hit its Boost targets due to AMD's ambiguous and non-guaranteed Boost speeds. None of this disproved my original point that the 285's theoreticals were best-case and the 280's were worst-case and as these reviews have shown, the 280 would be faster than the 285 in stock configurations. It took an overclocked part with 3rd party cooling and higher TDP (closer to the 280) for it to reach relative parity with the 280.
Tonga BARELY uses any less power and in some cases, uses more, is on par with the part it replaces, and costs MORE than the predecessor part it replaces. What do you think would happen if Nvidia tried to do the same later this week with their new Maxwell parts? It would be a complete and utter disaster.
Stop trying to put lipstick on a pig, if you are indeed as unbiased as you say you are you can admit there is almost no progress at all with this part and it simply isn't worth defending. Yes I favor Nvidia parts but I have used a variety in the past as well including a few highly touted ATI/AMD parts like the 9700pro and 5850. I actually favored 3dfx for a long time until they became uncompetitive and eventually bankrupt, but now I prefer Nvidia because much of their enthusiast/gamer spirit lives on and it shows in their products.
"if other GCN 1.1 parts like Hawaii are any indication, it's much more likely the 280 maintains its boost clocks compared to the 285 (due to low TDP limits)"
This is what you said. This is where I disagreed with you. The 285 maintains boost just as well as the 280. Further, GCN 1.1 Bonaire and even Hawaii reach and hold boost at stock TDP. The 290 series were not cooled sufficiently using reference coolers, but without any changes to TDP settings (I repeat, stock TDP) they boost fine as long as you cool them. GCN 1.1 boosts fine, end of story.
As far as Tonga goes, there's almost no progress in performance terms. In terms of power it depends on the OEM and I've seen good and bad. The only additions that really are interesting are the increased tessellation performance (though not terribly important at the moment) and finally getting TrueAudio into a mid-range part (it should be across the board by next gen I would hope - PS4 and XB1 have the same Tensilica DSPs).
I would hope they do substantially better with their future releases, or at least release a competent reference design that shows off power efficiency better than some of these third party designs.
Yes, and my comment was correct, it will ALWAYS be "more likely" the 280 maintains its boost over other GCN 1.x parts because we know the track record of GCN 1.0 cards and their conservative Boost compared to post-PowerTune GCN1.x and later parts as a result of the black eye caused by Hawaii. There will always be a doubt due to AMD's less-than-honest approach to Boost with Hawaii, plain and simple.
I also (correctly) qualified my statement by saying the low stated TDP of the 285 would be a hindrance to exceeding those rated specs and/or the performance of the 280, and we also see that is the case that in order to exceed those speed limits, AMD traded performance for efficiency to the point the 285's power consumption is actually closer to the 250W rated 280.
In any case, in another day or two, this unremarkable part is going to become irrelevant with GM104 Maxwell, no need to further waste any thoughts on it.
Speculating here. The data parallel instructions could be a way to share data between SIMD lanes. I could see this functionality being similar in functionality to what threadgroup local store allows, but without explicit usage of the local store.
It's possible this is an extension to, or makes new use of, the 32 LDS integer units in GCN (section 2.3.2 in the Southern Islands instruction set docs).
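For what it's worth, here's a toy model of what cross-lane data sharing buys you compared to bouncing values through the LDS. This is not the GCN 1.2 ISA - the 64-wide wavefront is the only real GCN number in it - just an illustration of a wavefront-wide reduction done entirely with lane swizzles.

```python
# Toy model of cross-lane data sharing within one 64-wide wavefront.
# NOT the actual GCN 1.2 instruction set -- just an illustration of exchanging
# data directly between SIMD lanes instead of via an explicit LDS round trip.
WAVE = 64

def lds_reduce(values):
    # LDS-style: every lane writes to shared memory, then the values are read back
    lds = list(values)                 # explicit shared-store round trip
    return sum(lds)

def swizzle_reduce(values):
    # Butterfly reduction via lane "swizzles": each lane adds the value held by
    # the lane 'offset' positions away, halving the exchange distance each step.
    v = list(values)
    offset = WAVE // 2
    while offset:
        v = [v[i] + v[i ^ offset] for i in range(WAVE)]
        offset //= 2
    return v[0]                        # every lane ends up holding the full sum

data = list(range(WAVE))
assert lds_reduce(data) == swizzle_reduce(data) == 2016
```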
It has to be artificially imposed, as AMD has already announced FirePro cards based on the Tonga ASIC that do not suffer from this castrated DP rate. AMD as usual taking a page from Nvidia's playbook, so now all the AMD fans poo-poo'ing Nvidia's sound business decisions can give AMD equal treatment. Somehow I doubt that will happen though!
Regarding the compression (delta color compression) changes for Tonga, does this have any effect on the actual size of data stored in VRAM?
For instance, if you take a 2GB Pitcairn card and a 2GB Tonga card showing the identical scene in a game, will they both have identical (monitored) VRAM usage? Assuming of course that neither is actually hitting the 2GB VRAM limit.
I'm wondering if it's possible to test whether or not this is the case, if it's currently unconfirmed.
VRAM usage will differ. Anything color compressed will take up less space (at whatever ratio the color compression algorithm allows). Of course this doesn't account for caching and programs generally taking up as much VRAM as they can, so it doesn't necessarily follow that overall VRAM usage will be lower on Tonga than Pitcairn. But it is something that can at least be tested.
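If someone wants to actually run that test, one low-effort way is to log VRAM usage on both cards through the same timedemo and compare the averages. The sketch below assumes GPU-Z-style CSV sensor logs; the column name and file names are placeholders that will likely differ in practice.

```python
# Rough sketch of the comparison suggested above: log VRAM usage on a Pitcairn
# card and a Tonga card through the same timedemo (e.g. with a sensor-logging
# tool writing CSV), then compare the averages. The column name below is an
# assumption and may differ depending on the tool/version.
import csv
from statistics import mean

def avg_vram_mb(log_path, column="Memory Used [MB]"):
    with open(log_path, newline="") as f:
        rows = list(csv.DictReader(f))
    return mean(float(r[column]) for r in rows if r.get(column, "").strip())

pitcairn = avg_vram_mb("r9_270x_run.csv")   # hypothetical log files
tonga    = avg_vram_mb("r9_285_run.csv")
print(f"Pitcairn avg: {pitcairn:.0f} MB, Tonga avg: {tonga:.0f} MB")
# Caveat: drivers cache aggressively and games often allocate as much as they
# can, so even identical scenes may not report identical usage.
```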
I see Anand still don't understand the purpose of Mantle; if they did, they wouldn't be using the most powerful CPU they could find. I would explain it to them, but I think it's already been explained to them a thousand times and they still don't grasp it.
Anand are a joke, they have no understanding of anything.
If Tonga is a referendum on Mantle, it basically proves Mantle is a failure and will never succeed. This pretty much shows most of what AMD said about Mantle is BS, that it takes LESS effort (LMAO) on the part of the devs to implement than DX.
If Mantle requires both an application update (game patch) from devs AFTER the game has already run past its prime shelf-date AND also requires AMD to release optimized drivers every time a new GPU is released, then there is simply no way Mantle will ever succeed in a meaningful manner with that level of effort. Simply put, no one is going to put in that kind of work if it means re-tweaking every time a new ASIC or SKU is released. Look at BF4: it's already in the rear-view mirror from DICE's standpoint, and no one even cares anymore, as they are already looking toward the next Battlefield.
Is this a joke or are you just new to the chipmaking industry? Maybe you should try re-reading the Wikipedia entry to understand GPUs are ASICs despite their more recent GPGPU functionality. GPU makers like AMD and Nvidia have been calling their chips ASICs for decades and will continue to do so, your pedantic objections notwithstanding.
But no need to take my word for it, just look at their own internal memos and job listings:
OK, I accept your arguments, but I still don't like this kind of terminology. To me, one may call things like a fixed-function video decoder an "ASIC" (for example the UVD blocks inside Radeon GPUs), but not the GPU as a whole, because people have been doing GPGPU on GPUs for a number of years now, and the "General Purpose" in GPGPU contradicts the "Application Specific" in ASIC, doesn't it? So, overall it's a terminology/naming issue; everyone uses whatever naming they want to use.
I think you are over-analyzing things a bit. When you look at the entire circuit board for a particular device, you will see each main component or chip is considered an ASIC, because each one has a specific application.
For example, even the CPU is an ASIC even though it handles all general processing, but its specific application for a PC mainboard is to serve as the central processing unit. Similarly, a southbridge chip handles I/O and communications with peripheral devices, Northbridge handles traffic to/from CPU and RAM and so on and so forth.
I reject the notion that we should be satisfied with a slower rate of GPU performance increase. We have more use than ever before for a big jump in power. 2560x1440@144Hz. 4K@60Hz.
Of course it's all good for me to say that without being a micro-architecture design engineer myself, but I think it's time for a total re-think. Or if the companies are holding anything back - bring it out now, please! :)
Process node shrinks are getting more and more difficult, equipment costs are rising, and the benefits of moving to a smaller node are also diminishing. So sadly I think we'll have to adjust to a more sedate pace in the industry.
I'm a longstanding AMD Radeon user for more than 10 years, but after reading this R9 285 review I can't help but think that, based on results of smaller GM107 in 750 Ti, GM204 in GTX 970/980 may offer much better performance/Watt/die area (at least for gaming tasks) in comparison to the whole AMD GPU lineup. Soon we'll see whether or not this will be the case.
BTW, is Tonga the only new GPU AMD has to offer in 2014? (if I'm not mistaken, the previous one from AMD, Hawaii, was released back in October 2013, almost a year ago) Does anybody know?
The thing is the moment I heard AMD explaining how Tonga was too new for current Mantle applications, I was like, "And there the other shoe is dropping."
The promise of low level API is that you get low level access and the developer gets more of the burden of carrying the optimizations for the game instead of a driver team. This is great for the initial release of the game and great for the company that wants to have less of a (or no) driver team, but it's not so great for the end user who is going to wind up getting new cards and needing that Mantle version to work properly on games no longer supported by their developer.
It's hard enough getting publishers and/or developers to work on a game a year or more after release to fix bugs that creep in and in some cases hard to get them to bother with resolution switches, aspect ratio switches, the option to turn off FXAA, the option to choose a software-based AA of your choice, or any of a thousand more doohickeys we should have by now as bog-standard. Can you imagine now relying on that developer--many of whom go completely out of business after finishing said title if they happen to work for Activision or EA--to fix all the problems?
This is why a driver team is better working on it. Even though the driver team may be somewhat removed from the development of the game, the driver team continues to have an incentive to want to fix that game going forward, even if it's a game no longer under development at the publisher. You're going to be hard pressed to convince Bobby Kotick at Activision that it's worth it to keep updating versions of games older than six months (or a year for Call of Duty) because at a certain point, they WANT you to move on to another game. But nVidia and AMD (and I guess Intel?) want to make that game run well on next gen cards to help you move.
This is where Mantle is flawed and where Mantle will never recover. Every time they change GCN, it's going to wind up with a similar problem. And every time they'll wind up saying, "Just switch to the DX version." If Mantle cannot be relied upon for the future, then it is Glide 2.0.
And why even bother at all? Just stick with DirectX from the get-go, optimize for it (as nVidia has shown there is plenty of room for improvement), and stop wasting any money at all on Mantle since it's a temporary version that'll rapidly be out of date and unusable on future hardware.
It's great that they've caught up with H.264 on hardware and the card otherwise looks fine. The bottom line for me, though, is that I don't see the point of buying card now without H.265 on hardware and an HDMI 2.0 port - 2 things Maxwell will bring this year. I haven't heard what AMDs timetable is there though.
It really irritates me that they are making these cards throttle to keep power and temps down! That is pathetic! If you can't make the thing right, just don't make it! Even if it throttles 0.1MHz it should not be tolerated! We pay good money for this stuff and we should get what we pay for! It looks like the only AMD cards worth anything are the 270's and under. It stinks you have to go Nvidia to get more power! Because Nvidia really gouges people with their prices! But I must say the GTX 970 is priced great if it is still around $320. But AMD should have never even tried with this R9 285! First of all, when you pay that much you should get more than 2GB. And another thing, the card is pretty much limited to the performance of the R9 270's because of the V-Ram count! Yeah, the 285 has more power than the 270's, but what's the point when you do not have enough V-Ram to take the extra power where you need a card like that to be? In other words, if you are limited to 1080p anyway, why pay the extra money when an R7 265 will handle anything at 1080p beautifully? This R9 285 is a pointless product! It is like buying a rusted out Ford Pinto with a V-8 engine! Yeah, the engine is nice! But the car is a pos!
(QUOTE) So a 2GB card is somewhat behind the times as far as cutting edge RAM goes, but it also means that such a card only has ¼ of the RAM capacity of the current-gen consoles, which is a potential problem for playing console ports on the PC (at least without sacrificing asset quality).
(SIGH) So now even reviewers are pretending the consoles can outperform a mid range GPU! WOW! How about telling the truth like you did before you got paid off! The only reason a mid range card has problems with console ports is because they are no longer optimized! They just basically make it run on PC and say xxxx you customers here it is! And no the 8GB on the consoles are used for everything not for only V-Ram! We are not stupid idiots that fall for anything like the idiots in Germany back in the 1930's!
CrazyElf - Wednesday, September 10, 2014 - link
All in all, this doesn't really change the market all that much.I still very firmly feel that the R9 290 right now (Q3 2014) remains the best price:performance of the mid to high end cards. That and the 4GB VRAM which may make it more future proof.
What really is interesting at this point is what AMD has to respond on Nvidia's Maxwell.
MrSpadge - Wednesday, September 10, 2014 - link
I Agree - Tonga is not bad, but on the other hand it does not change anything substantially compared to Tahiti. This would have been a nice result 1 - 1.5 years after the introduction of Tahiti. But that's almost been 3 years ago! The last time a GPU company showed no real progress after 3 years they went out of business shortly afterwards...And seing how AMD brags to beat GTX760 almost makes cry. That's the double cut-down version of a 2.5 years old chip which is significantly smaller than Tonga! This is only a comparison because nVidia kept this card at a far too high price because there was no competitive pressure from AMD.
If this is all they have their next generation will get stomped by Maxwell.
iLovefloss - Wednesday, September 10, 2014 - link
So all you got from this review is that Tonga is a cut down version of Tahiti? After reading this review, this is the impression you were left with?MrSpadge - Thursday, September 11, 2014 - link
Nope. But in the end the result performs just the same at even almost the same power consumption. Sure, there are some new features.. but so far and I expect for the foreseeable future they don't matter.Demiurge - Wednesday, September 10, 2014 - link
This is the first mid-range card to have all the value add features of the high-end cards. I wish AMD would leverage TrueAudio better, but the other features and the nice TDP drop.The color compression enhancement is a very interesting feature. I think that in itself deserves a little applause because of its significance in the design and comparing to the 280's. I think this is more significant, not as a performance feature, but similar to what Maxwell represented for NV in terms of efficiency. Both are respectable design improvements, in different areas. It's a shame they don't cross-license... seems like such as waste.
MrSpadge - Thursday, September 11, 2014 - link
Well, the TDP-drop is real, but mostly saves virtual power. By this I mean that 280 / 7950 never come close to using 250 W, and hence the savings from Tonga are far less than the TDP difference makes it seem. The average between different articles seems to be ~20 W saving at the wall and establishes about a power-efficiency parity with cards like GTX670.The color compression could be Tongas best feature. But I still wonder: if Pitcairn on 270X comes so close to 285 and 280 performance with 256 bit memory bus and without color compression.. how much does it really matter (for 285)? To me it seems that Tahiti most often didn't need that large bus rather than color compression working wonders for Tonga. Besides, GTX770 and GTX680 also hold up fine at that performance level with a 256 bit bus.
Demiurge - Thursday, September 11, 2014 - link
The TDP drop is something I did not think about being a paper launch value. You make a good point about the color compression too. It will be interesting how both fair. That may be an interesting topic to follow up during the driver refresh.As an owner of GTX 260 with a 448-bit bus, I can tell you that with anti-aliasing, it matters quite a bit as that becomes the limiter. The shader count is definitely not the limiter usually in the low-end and mid-range displays that these cards will typically be paired with. My GTX 260 and 1280x1024 monitor kind of illustrate that with 216 Shaders/896MB. :-)
It isn't pretty, but I don't see anything that forces me to upgrade yet. Think I've got two more generations or so to wait on before performance is significant enough, or a groundbreaking feature would do it. I'm actually considering upgrading out of boredom and interest in gimmicky features more than anything else at this point.
TiGr1982 - Thursday, September 11, 2014 - link
GTX 260 is like 6 years old now. It's lacking DX11, having less than 1 GB of (relatively slow) GDDR3 VRAM, and overall should be 3-4 times slower than R9 285 or R9 290, I guess.I really didn't think anybody still uses these old gen cards (e.g. I have HD 7950 Boost Dual-X which is essentially identical to R9 280).
P39Airacobra - Friday, January 9, 2015 - link
Because they would loose money! LOL. And they are both about the same anyway, Except AMD goes for brute force to get performance,(like using aV8) And Nvidia uses efficency with power. (Like a turbo charged 4cyl or 6cyl)bwat47 - Thursday, September 11, 2014 - link
"And seing how AMD brags to beat GTX760 almost makes cry. That's the double cut-down version of a 2.5 years old chip which is significantly smaller than Tonga! This is only a comparison because nVidia kept this card at a far too high price because there was no competitive pressure from AMD."You are being pretty silly here. Both AMD and Nvidia were rebranding a lot of cards these last few gens. You can'y go after AMD for rebranding a 2-3 year old chip, and then say its fine if nvidia does it and blame AMD's 'lack of competitive pressure'. If lack of competitive pressure was the reason for rebranding, then there was lack of competitive pressure on both sides.
And I highly doubt the 285 is 'all amd has'. this was just a small update to their product line, to bring some missing features (freesync, true audio etc...), and reduced power consumption to the 28x series. I'm sure there is a 3xx series coming down the road (or whatever they will call it). Both AMD and nvidia have been working been squeezing all they can out of older architecture for the past few years, you can't really put the blame on one of the other without being hypocritical.
MrSpadge - Thursday, September 11, 2014 - link
The point is that Tonga is NOT a rebrand. It's a brand-new chip, AMD themselves call it the 3rd generation of GCN. Making a new chip costs AMD a sgnificant amount of money, that's why they haven't bothered yet to update Pitcairn to at least 2nd gen GCN (1.1). And I'm totally fine with that. It's also OK for nVidia to use GK104 for GTX760. What's not OK - from my point of view - is AMD investing into this new chip Tonga and hardly getting any real world benefit over the 3 year old Tahiti designs. If nVidia introduces a Maxwell which performs and costs them just the same as the previous Kepler, I'll call them out for this as well. But this is pretty much excluded, from what we've seen so far."And I highly doubt the 285 is 'all amd has'."
It's their 3rd gen GCN architecture, as they say themselves. There's going to be a bigger chip using this architecture, but apart from that I doubt we'll see anything from AMD in the next year which is not yet in Tonga.
just4U - Friday, September 12, 2014 - link
The one nice thing about the 285 is it will have resale value that has been lost on the 280-290 series thanks in large part to bit mining. There's a good chance that most won't feel that the 285 (and future incarnations) were run into the ground like the earlier ones were.Frenetic Pony - Wednesday, September 10, 2014 - link
Nah, what's interesting is that Maxwell may not be worth "responding" too. It's an almost totally mobile focused design, one that's not even totally out yet. If these benchmarks hold true then it's very exciting for AMD's upcoming high end. Nvidia may end up with a 512bit bus as well, but AMD's bandwidth optimizations will mean a similarly specced card of their's will still handily beat anything NVIDIA has in terms of resolution scaling.Heck it may even be enough to get a single GPU capable of running games at 4k at a reasonable fps. And that would be awesome. Maxwell might be good for Nvidia's mobile business, but I doubt it's going to help them take back the top spot for high end stuff from AMD.
mindbomb - Wednesday, September 10, 2014 - link
UVD always supported vc-1. The first version supported full decode of h264 and vc-1. You are thinking of nvidia, who didn't have full hardware decode on a real desktop part until fermi.mindbomb - Wednesday, September 10, 2014 - link
Not that it matters really. It stopped being relevant when hd-dvd lost to bluray.Navvie - Thursday, September 11, 2014 - link
A lot of blu-rays have vc-1 content.nathanddrews - Thursday, September 11, 2014 - link
Blu-ray.com has a database that you can search by codec. VC-1 is very much alive and thriving.Ryan Smith - Wednesday, September 10, 2014 - link
According to my DXVA logs, the 280 did not support VC-1/WMV9. That is what I'm basing that on.mindbomb - Wednesday, September 10, 2014 - link
I think your logs are referring to the nvidia gtx 280, which did not support full vc-1 decode. AMD had it since the radeon 2600xt, which is ancient.NikosD - Saturday, September 13, 2014 - link
True.One of the main advantages of first generation UVD (ATI Radeon HD2000 series) over Nvidia, was the full DXVA VLD support of both 1080p H.264 L4.1 (BluRay spec) and VC-1.
felaki - Wednesday, September 10, 2014 - link
The article says that the Sapphire card has "1x DL-DVI-I, 1x DL-DVI-D, 1x HDMI, and 1x DisplayPort". Can you be more precise as to which versions of the spec are supported? Is it HDMI 1.4 or HDMI 2.0? I believe since this refers to MST, it's only HDMI 1.4 and a DisplayPort connection is required in MST mode for 4K@60Hz output?Reading the recent GPU articles, I'm very puzzled why HDMI 2.0 adoption is still lacking in GPUs and displays, even though the spec has been out there for about a year now. Is the PC industry reluctant to adopt HDMI 2.0 for some (political(?), business(?)) reason? I have heard only bad things about DisplayPort 1.2 MST to carry a 4K@60Hz signal, and I'm thinking it's a buggy hack for a transitional tech period.
If the AMD newest next-gen graphics card only supports HDMI 1.4, that is mind-boggling. Please tell me I'm confused and this is a HDMI 2.0-capable release?
Ryan Smith - Wednesday, September 10, 2014 - link
DisplayPort 1.2 and HDMI 1.4. Tonga does not add new I/O options.felaki - Wednesday, September 10, 2014 - link
Thanks for clarifying this!Penti - Wednesday, September 10, 2014 - link
You can do 4K SST on both Nvidia and AMD-cards as long as they are DisplayPort 1.2 capable. It depends on your screen. There is no HDMI 600MHz on any graphics processor. Neither is their much of support from monitors or TVs as most don't do 600MHz.felaki - Wednesday, September 10, 2014 - link
Thanks! I was not actually aware that SST existed. I see here http://community.amd.com/community/amd-blogs/amd-g... that AMD is referring to SST as being the thing to fix up the 4K issue, although the people in the comments on that link refer that the setup is not working properly.How do people generally see SST? Should one defer buying a new system now until proper HDMI 2.0 support comes along, or is SST+DisplayPort 1.2 already a glitch-free user experience for 4K@60Hz?
Kjella - Wednesday, September 10, 2014 - link
Got 3840x2160x60Hz using SST/DP and it's been fine, except UHD gaming is trying to kill my graphics card.mczak - Wednesday, September 10, 2014 - link
DP SST 4k/60Hz should be every bit as glitch free as proper hdmi 2.0 (be careful though with the latter since some 4k TVs claiming to accept 60Hz 4k resolutions over hdmi will only do so with ycbcr 4:2:0). DP SST has the advantage that actually even "old" gear on the graphic card side can do it (such as radeons from the HD 6xxx series - from the hw side, if it could do DP MST 4k/60Hz it should most likely be able to do the same with SST too, the reason why MST hack was needed in the first place is entirely on the display side).But if you're planning to attach your 4k TV to your graphic card a DP port might not be of much use since very few TVs have that.
Solid State Brain - Wednesday, September 10, 2014 - link
I won't get another AMD video card until idle multimonitor consumption gets fixed. According to other websites, power consumption in such case increases substantially whereas NVidia video cards have almost the same consumption as when using a single display. In the case of the Sapphire 285 Dual-X it increases by almost 30W just by having a second display connected!!I think Anandtech should start measuring idle power consumption when more than one display is connected to the video card / multimonitor configurations. It's an important information for many users who not only game but also need to have productivity needs.
Solid State Brain - Wednesday, September 10, 2014 - link
And of course, a comment editing function would be useful too.shing3232 - Wednesday, September 10, 2014 - link
well, AMD video card have to run higher frequency with multiscreen than with a single monitormczak - Wednesday, September 10, 2014 - link
This is only partly true. AMD cards nowadays can stay at the same clocks in multimon as in single monitor mode though it's a bit more limited than GeForces. Hawaii, Tonga can keep the same low clocks (and thus idle power consumption) up to 3 monitors, as long as they all are identical (or rather more accurately probably, as long as they all use the same display timings). But if they have different timings (even if it's just 2 monitors), they will clock the memory to the max clock always (this is where nvidia kepler chips have an advantage - they will stay at low clocks even with 2, but not 3, different monitors).Actually I believe if you have 3 identical monitors, current kepler geforces won't be able to stick to the low clocks, but Hawaii and Tonga can, though unfortunately I wasn't able to find the numbers for the geforces - ht4u.net r9 285 review has the numbers for it, sorry I can't post the link as it won't get past the anandtech forum spam detector which is lame).
Solid State Brain - Thursday, September 11, 2014 - link
A twin monitor configuration where the secondary display is smaller / has a lower resolution than the primary one is a very common (and logic) usage scenario nowadays and that's what AMD should sort out first. I'm positively surprised that on newer Tonga GPUs if both displays are identical frequencies remain low (according to the review you pointed out), but I'm not going to purchase a different display (or limit my selection) to get advantage of that when there's no need to with equivalent NVidia GPUs.mczak - Thursday, September 11, 2014 - link
Fixing this is probably not quite trivial. The problem is if you reclock the memory you can't honor memory requests for display scan out for some time. So, for single monitor, what you do is reclock during vertical blank. But if you have several displays with different timings, this won't work for obvious reasons, whereas if they have identical timings, you can just run them essentially in sync, so they have their vertical blank at the same time.I don't know how nvidia does it. One possibility would be a large enough display buffer (but I think it would need to be in the order of ~100kB or so, so not quite free in terms of hw cost).
PEJUman - Thursday, September 11, 2014 - link
I used multimonitor with AMD & NVIDIA cards. I would take that 30W hit if it means working well.NVIDIA: too aggressive with low power mode, if you have video on one screen & game on the other, it will remain at the clock speed of the 1st event (if you start the video before the game loading, it will be stuck at the video clocks).
I used 780TI currently, R9 290x I had previously works better where it will always clock up...
hulu - Wednesday, September 10, 2014 - link
The conclusions section of Crysis: Warhead seems to be copy-pasted from Crysis 3. R9 285 does not in fact trail GTX 760.thepaleobiker - Wednesday, September 10, 2014 - link
@Ryan - A small typo on the last page, last line of first paragraph - "Functionally speaking it’s just an R9 285 with more features"It should be R9 280, not 285. Just wanted to call it out for you! :)
Bring on more Tonga, AMD!
FriendlyUser - Wednesday, September 10, 2014 - link
I would like to note that if memory compression is effective, it should not only improve bandwidth but also reduce the need for texture memory. Maybe 2GB with compression is closer to 3GB in practice, at least if the ~40% compression advantage is true.Obviously, there is no way to predict the future, but I think your conclusion concerning 2GB boards should take compression in account.
Spirall - Wednesday, September 10, 2014 - link
If GCN1.2 (instead of a GCN 2.0) is what AMD has to offer as the new arquitecture for their next year cards, Maxwell (based in 750Ti x 260X tests), will punch hard AMD in terms of performance per watt and production cost (not price) so their net income.shing3232 - Wednesday, September 10, 2014 - link
750ti use a better 28nm process call HPM while rest of the 200 series use HPL , that's the reason why maxwell are so efficient.Spirall - Wednesday, September 10, 2014 - link
I'm afraid this won't be enough (but hope it does). Anyway, as Nvidia is expected to launch their Maxwell 256 bits card nearby, we'll have the answer soon.Samus - Wednesday, September 10, 2014 - link
Am I missing something or is this card slower than the 280...what the hell is going on?tuxRoller - Wednesday, September 10, 2014 - link
Yeah, you're missing something:)Samus - Thursday, September 11, 2014 - link
BF4 its about 4% slower.Kenshiro70 - Wednesday, September 10, 2014 - link
"something of a lateral move for AMD, which is something we very rarely see in this industry"Really? Seems like the industry has been rebadging for the last two release cycles. How about starting to test and show results on 4k screens? 60Hz ones are only $500 now, and that will put a little pressure on the industry to stop coasting. I have no intention of spending money on a minor bump up in specs. Bitcoin mining demand can't last forever.
tuxRoller - Wednesday, September 10, 2014 - link
Error page 4: chart should read "lower is better"jeffrey - Wednesday, September 10, 2014 - link
Saw this too, the Video Encode chart Ryan.Congrats btw!
Ryan Smith - Thursday, September 11, 2014 - link
Whoops. Thanks.yannigr2 - Wednesday, September 10, 2014 - link
Great article and review. The only bad news here I think is Mantle.
But on the matter of Mantle, maybe a slower processor would show less of that performance drop, or even keep performing better than DirectX 11. Maybe one more test on a slower chip, say an FX machine?
mapesdhs - Wednesday, September 10, 2014 - link
What were AMD thinking? How can the 285 be a replacement for the 280, given its
reduced VRAM, while at the same time AMD is pushing Mantle? Makes no sense at all.
Despite driver issues, I'd kinda gotten used to seeing AMD be better than NVIDIA in
recent times for VRAM capacity. A new card with only 2GB is a step backwards. All
NVIDIA has to do is offer a midrange Maxwell with a minimum 4GB and they're home.
No idea if they will. Time will tell.
Ian.
Alexvrb - Wednesday, September 10, 2014 - link
There you have it, and with no issues with boost. Sorry Chizow, so much for that. :P
Thanks for the in-depth review, Ryan. It appears that power consumption is going to vary from implementation to implementation. Lacking a reference model makes it tricky. Another review I read compared Gigabyte's Windforce OC 285 to a similarly mild OC'd 280, finding a substantial difference in the 285's favor.
chizow - Thursday, September 11, 2014 - link
No issues with Boost once you slap on a third-party cooler and blow past the rated TDP, sure. :D
But just as I said, AMD's rated specs were bogus; in reality we see that:
1) the 285 is actually slower than the 280 it replaces, even in highly overclocked factory configurations (original point about theoretical performance, debunked)
2) the TDP advantages of the 285 go away; at a 190W target TDP, AMD trades performance for efficiency, just as I stated. Increasing performance through better cooling results in higher power draw and lower efficiency, to the point the advantage is negligible compared to the 280.
It's obvious AMD wanted the 285 to look good on paper, saying hey look, it's only 190W TDP, when in actual shipping configurations (which also make it look better due to factory OCs and 3rd party coolers), it draws power closer to the 250W 280 and barely matches its performance levels.
In the end one has to wonder why AMD bothered. Sure it's cheaper for them to make, but this part is a downgrade for anyone who bought a Tahiti-based card in the last 3 years (yes, it's nearly 3 years old already!).
bwat47 - Thursday, September 11, 2014 - link
I think the main reason for this card was to bring things up to par when it comes to features. The 280 (and 280X) were rebadged older high-end cards (7950, 7970), and while these offered great value when it came to raw performance, they lack features that the rest of the 200 series supports, such as FreeSync and TrueAudio, and this feature discrepancy was confusing to customers. It makes sense to introduce a new card that brings things up to par on features. I bet they will release a 285X to replace the 280X as well.
chizow - Thursday, September 11, 2014 - link
Marginal features alone aren't enough to justify this card's cost, especially in the case of FreeSync, which still isn't out of the proof-of-concept phase, and TrueAudio, which is unofficially vaporware. If AMD had released this card last year at Hawaii/Bonaire's launch at this price point, OR released it now, nearly 12 months late, at a *REDUCED* price point, it would make more sense. But releasing it now at a significant premium (+20%; the 280 is selling for $220, the 285 for $260) compared to the nearly 3 year old ASIC it struggles to match makes no sense. There is no progress there, and I think the market will agree.
If it isn't obvious now that this card can't compete in the market, it will become painfully obvious when Nvidia launches their new high-end Maxwell parts as expected next month. The 980 and 970 will drive down the price on the 780, 290/X and 770, but the real 285 killer will be the 960 which will most likely be priced in this $250-300 range while offering better than 770/280X performance.
Alexvrb - Thursday, September 11, 2014 - link
You just can't admit to being wrong. It maintains boost fine - end of story. That's what I disagreed with you on in the first place. No boost issues - the 290 series had thermal problems. Slapping a different cooler on doesn't raise TDP, it just removes the obstacles to reaching that TDP. With an inadequate cooler, you're getting temp-throttled before you ever reach rated TDP. Ask Ryan, he'll set you straight.
On top of this, depending on which model 285 you test, some of them eat significantly less power than an equivalent 280. Not all board partners did an equal job. Look at different reviews of different models and you'll see different results. Also, performance is better than I figured it would be, and in most cases it is slightly faster than the 280. Which, again, I never figured would happen and never claimed.
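One toy way to picture the distinction being argued here (my own simplification; real PowerTune/Boost behavior is far more involved): the sustained clock is set by whichever limit binds first, so a better cooler relaxes the temperature cap but leaves the power cap where it was.

```python
# Toy model: sustained clock is capped by the tighter of the power target and
# the temperature target. All numbers are invented for illustration.

def sustained_clock(boost_mhz, draw_w, power_limit_w, temp_c, temp_limit_c):
    power_cap = boost_mhz * min(1.0, power_limit_w / draw_w)
    thermal_cap = boost_mhz * min(1.0, temp_limit_c / temp_c)
    return min(power_cap, thermal_cap)

# Weak cooler: temperature limits the card before the power target ever binds.
print(sustained_clock(918, draw_w=185, power_limit_w=190, temp_c=96, temp_limit_c=94))
# Big open-air cooler: temperature is fine, so only the power target matters.
print(sustained_clock(918, draw_w=195, power_limit_w=190, temp_c=72, temp_limit_c=94))
```

The point of contention in the thread is simply which of the two caps a given 285 board is actually hitting.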
chizow - Friday, September 12, 2014 - link
Who cares what you disagreed with? The point you took issue with was a corollary to the point I was making, which turned out to be true: the theoreticals were misstated and inaccurate as the basis for any conclusion about the 285 being faster than the 280. As we have seen:
1) The 285 barely reaches parity but in doing so, it requires a significant overclock which forces it to blow past its rated 190W TDP and actually draws closer to the 250W TDP of the 280.
2) It requires a 3rd party cooler similar to the one that was also necessary in keeping the 290/X temps in check in order to achieve its Boost clocks.
As for Ryan setting me straight, lmao, again, his temp tests and subtext already prove me to be correct:
"Note that even under FurMark, our worst case (and generally unrealistic) test, the card only falls by less than 20Mhz to 900MHz sustained."
So it does indeed throttle down to 900MHz even with the cap taken off its 190W rated TDP and a more efficient cooler. *IF* it was limited to a 190W hard TDP target *OR* it was forced to use the stock reference cooler, it is highly likely it would indeed have problems maintaining its Boost, just as I stated. AMD's reference specs trade performance for efficiency, once performance is increased that efficiency is reduced and its TDP increases.
Alexvrb - Friday, September 12, 2014 - link
Look, I get it, you're an Nvidia fanboy. But at least you admitted you were wrong, in your own way, finally. It sustains boost fine. Furmark makes a lot of cards throttle - including Maxwell! Whoops! Should we start saying that Maxwell can't hold boost because it throttles in Furmark? No, because that would be idiotic. I think Maxwell is a great design.
However, so is Tonga. Read THG's review of the 285. Not only does it slightly edge out the 280 on average performance, it uses substantially less power. Like, 40W less. I'm not sure what Sapphire (the card reviewed here) is doing wrong exactly - the Gigabyte Windforce OC is fairly miserly and has similar clocks.
chizow - Saturday, September 13, 2014 - link
LOL, Nvidia fanboy, that's rich coming from the Captain of the AMD Turd-Polishing Patrol. :D
I didn't admit I was wrong, because my statement, to any non-idiot, was never dependent on maintaining Boost in the first place. But I am glad to see that not only is the 285 generally slower than the 280 without significant overclocks, it still has trouble maintaining Boost despite drawing more than its rated 190W and using a better-than-reference cooler.
You could certainly say Maxwell doesn't hold boost because it throttles in Furmark, but that would prove once and for all you really have no clue what you are talking about, since every Nvidia card introduced since Boost was invented has had no problem hitting its rated Boost speeds, even with the stock reference blower designs. The difference, of course, is that Nvidia takes a conservative approach to its Boost ratings, numbers it knows all its cards can hit under all conditions, unlike AMD, which takes the "good luck/cherry picked" approach (see: R9 290/290X launch debacle).
And finally, about other reviews: for every review that says the 285 is better than the 280 in performance and power consumption, there is at least one more that echoes the sentiments of this one. The 285 barely reaches parity and doesn't consume meaningfully less power in doing so. But keep polishing that turd! This is an ASIC only a true AMD fanboy could love, some 3 years after the launch of the chip it is set to replace.
chizow - Saturday, September 13, 2014 - link
Oh, and just to prove I can admit when I am wrong: you are right, Maxwell did throttle and fail to meet its Boost speeds in Furmark, but these are clearly artificially imposed driver limitations, as Maxwell showed it can easily OC to 1300MHz and beyond:
http://www.anandtech.com/show/7764/the-nvidia-gefo...
http://www.anandtech.com/show/7854/nvidia-geforce-...
Regardless, any comparison of this chip to Maxwell is laughable. Maxwell delivered the same performance at nearly a 50% reduction in TDP, or inversely nearly double the performance at the same TDP, all at a significantly reduced price point on the same process node.
What does Tonga bring us? 95-105% of the R9 280's performance at 90% of the TDP and 120% of the price, almost 3 years later? Who would be happy with this level of progress?
Nvidia is set to introduce their performance midrange GM104-based cards next week; do you think ANYONE is going to draw parallels between those cards and Tonga? We already know what Maxwell is capable of, and it set the bar extremely high, so if the GTX 970 and 980 come anywhere close to those increases in performance and efficiency, this part is going to look even worse than it does now.
Alexvrb - Saturday, September 13, 2014 - link
You were wrong about it being unable to hold boost; you claimed that GCN 1.1 can't hold boost despite clear evidence to the contrary. Silly. Then you were wrong about Maxwell and Furmark - though you kind of admitted you were wrong.
Regarding that being a "driver limitation": you can clock a GPU to the moon, and it's fine until it gets a heavy load. However, most users won't even know they're being throttled. I had this same discussion YEARS ago with a Pentium 4 guy. You can overclock all you want - when you load it up heavy, it's a whole new game. In that case the user never noticed until I showed him his real clocks while running a game.
Tonga averages a few % higher performance, dumps less heat into your case, and uses less power. Aside from this Sapphire Dual X, most 285 cards seem to use quite a bit less power, run cool and quiet. With all that being said, I think the 280 and 290 series carry a much better value in AMD's lineup. I'm certainly not a fanboy, you're much closer to claiming that title. I've actually used mostly Nvidia cards over the years. I've also used graphics from 3dfx, PowerVR, and various integrated solutions. My favorite cards over the years were a trusty Kyro II and a GeForce 3 vanilla, which was passively cooled until I got ahold of it. Ah those were the days.
chizow - Monday, September 15, 2014 - link
No, I said being a GCN 1.1 part meant it was *more likely* not to meet its Boost targets, thus overstating its theoretical performance relative to the 280. And based on the GCN 1.1 parts we had already seen, this is true: it was MORE LIKELY not to hit its Boost targets due to AMD's ambiguous and non-guaranteed Boost speeds. None of this disproved my original point that the 285's theoreticals were best-case and the 280's were worst-case, and as these reviews have shown, the 280 would be faster than the 285 in stock configurations. It took an overclocked part with 3rd-party cooling and a higher TDP (closer to the 280's) for it to reach relative parity with the 280.
Tonga BARELY uses any less power, in some cases uses more, is on par with the part it replaces, and costs MORE than that predecessor. What do you think would happen if Nvidia tried the same thing later this week with their new Maxwell parts? It would be a complete and utter disaster.
Stop trying to put lipstick on a pig. If you are indeed as unbiased as you say, you can admit there is almost no progress at all with this part and it simply isn't worth defending. Yes, I favor Nvidia parts, but I have used a variety in the past as well, including a few highly touted ATI/AMD parts like the 9700 Pro and 5850. I actually favored 3dfx for a long time until they became uncompetitive and eventually went bankrupt, but now I prefer Nvidia because much of that enthusiast/gamer spirit lives on there, and it shows in their products.
Alexvrb - Tuesday, September 16, 2014 - link
"if other GCN 1.1 parts like Hawaii are any indication, it's much more likely the 280 maintains its boost clocks compared to the 285 (due to low TDP limits)"This is what you said. This is where I disagreed with you. The 285 maintains boost just as well as the 280. Further, GCN 1.1 Bonaire and even Hawaii reach and hold boost at stock TDP. The 290 series were not cooled sufficiently using reference coolers, but without any changes to TDP settings (I repeat, stock TDP) they boost fine as long as you cool them. GCN 1.1 boosts fine, end of story.
As far as Tonga goes, there's almost no progress in performance terms. In terms of power it depends on the OEM and I've seen good and bad. The only additions that really are interesting are the increased tessellation performance (though not terribly important at the moment) and finally getting TrueAudio into a mid-range part (it should be across the board by next gen I would hope - PS4 and XB1 have the same Tensilica DSPs).
I would hope they do substantially better with their future releases, or at least release a competent reference design that shows off power efficiency better than some of these third party designs.
chizow - Wednesday, September 17, 2014 - link
Yes, and my comment was correct: it will ALWAYS be "more likely" that the 280 maintains its boost over other GCN 1.x parts, because we know the track record of GCN 1.0 cards and their conservative Boost compared to post-PowerTune GCN 1.x and later parts, as a result of the black eye caused by Hawaii. There will always be doubt due to AMD's less-than-honest approach to Boost with Hawaii, plain and simple.
I also (correctly) qualified my statement by saying the low stated TDP of the 285 would be a hindrance to exceeding those rated specs and/or the performance of the 280, and we see that is the case: in order to exceed those speed limits, AMD traded performance for efficiency to the point that the 285's power consumption is actually closer to that of the 250W-rated 280.
In any case, in another day or two this unremarkable part is going to become irrelevant with GM104 Maxwell; no need to waste any further thought on it.
etherlore - Thursday, September 11, 2014 - link
Speculating here: the data parallel instructions could be a way to share data between SIMD lanes. I could see this being similar in function to what threadgroup local store allows, but without explicit use of the local store.
It's possible this is an extension to, or makes new use of, the 32 LDS integer units in GCN (section 2.3.2 in the Southern Islands instruction set docs).
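To make that speculation concrete, here is a toy model of the two sharing paths (the "instructions" below are invented for illustration and are not real GCN operations): bouncing a value through an LDS-style scratch array versus a direct cross-lane broadcast that never touches the local store.

```python
# Toy model of a 16-lane SIMD sharing one lane's value with every other lane.
# The operations below are illustrative stand-ins, not actual GCN instructions.

LANES = 16
values = [lane * lane for lane in range(LANES)]  # per-lane register contents

def share_via_lds(values, src_lane):
    """Route through an explicit local store (the threadgroup LDS approach)."""
    lds = [0] * LANES
    for lane in range(LANES):                     # every lane writes its value
        lds[lane] = values[lane]
    return [lds[src_lane] for _ in range(LANES)]  # every lane reads lane src_lane

def share_via_lane_permute(values, src_lane):
    """Hypothetical cross-lane broadcast that skips the LDS entirely."""
    return [values[src_lane] for _ in range(LANES)]

assert share_via_lds(values, 5) == share_via_lane_permute(values, 5)
print(share_via_lane_permute(values, 5))  # every lane now holds lane 5's value
```

If the new data-parallel instructions work anything like the second path, the win would be skipping the explicit LDS round trip rather than any new functionality per se.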
vred - Thursday, September 11, 2014 - link
And... DP rate at last. Sucks to have it at 1/16, but at least now it's confirmed. First review where I've seen this data published.
chizow - Thursday, September 11, 2014 - link
It has to be artificially imposed, as AMD has already announced FirePro cards based on the Tonga ASIC that do not suffer from this castrated DP rate. AMD as usual taking a page from Nvidia's playbook, so now all the AMD fans poo-poo'ing Nvidia's sound business decisions can give AMD equal treatment. Somehow I doubt that will happen though!
Samus - Thursday, September 11, 2014 - link
If this is AMD's Radeon refresh, and if the 750 Ti tells us anything, they are screwed when Maxwell hits the streets next month.
Atari2600 - Thursday, September 11, 2014 - link
The one thing missed in all this - APUs.
As we all know, APUs are bandwidth starved. A 30-40% increase in memory subsystem efficiency will do very nicely for removing a major bottleneck.
That is before the move to stacked chips or eDRAM.
limitedaccess - Thursday, September 11, 2014 - link
@Ryan: Regarding the compression (delta color compression) changes for Tonga, does this have any effect on the actual size of data stored in VRAM?
For instance, if you take a 2GB Pitcairn card and a 2GB Tonga card showing an identical scene in a game, will they both have identical (monitored) VRAM usage? Assuming, of course, that neither is actually hitting the 2GB VRAM limit.
I'm wondering if it is possible to test whether or not this is the case, if unconfirmed.
Ryan Smith - Sunday, September 14, 2014 - link
VRAM usage will differ. Anything color compressed will take up less space (at whatever ratio the color compression algorithm allows). Of course this doesn't account for caching and programs generally taking up as much VRAM as they can, so it doesn't necessarily follow that overall VRAM usage will be lower on Tonga than Pitcairn. But it is something that can at least be tested.
abundantcores - Thursday, September 11, 2014 - link
I see Anand still doesn't understand the purpose of Mantle; if they did, they wouldn't be using the most powerful CPU they could find. I would explain it to them, but I think it's already been explained to them a thousand times and they still don't grasp it.
Anand is a joke; they have no understanding of anything.
chizow - Thursday, September 11, 2014 - link
If Tonga is a referendum on Mantle, it basically proves Mantle is a failure and will never succeed. This pretty much shows most of what AMD said about Mantle is BS, that it takes LESS effort (LMAO) on the part of the devs to implement than DX.
If Mantle requires both an application update (game patch) from devs AFTER the game has already run past its prime shelf date AND optimized drivers from AMD every time a new GPU is released, then there is simply no way Mantle will ever succeed in a meaningful manner with that level of effort. Simply put, no one is going to put in that kind of work if it means re-tweaking every time a new ASIC or SKU is released. Look at BF4: it's already in the rear-view mirror from DICE's standpoint, and no one even cares anymore as they are already looking toward the next Battlefield.
TiGr1982 - Thursday, September 11, 2014 - link
Please stop calling GPUs ASICs - this looks ridiculous.
Please go to Wikipedia and read what "ASIC" is.
chizow - Thursday, September 11, 2014 - link
Is this a joke or are you just new to the chipmaking industry? Maybe you should try re-reading the Wikipedia entry to understand GPUs are ASICs despite their more recent GPGPU functionality. GPU makers like AMD and Nvidia have been calling their chips ASICs for decades and will continue to do so, your pedantic objections notwithstanding.
But no need to take my word for it, just look at their own internal memos and job listings:
https://www.google.com/#q=intel+asic
https://www.google.com/#q=amd+asic
https://www.google.com/#q=nvidia+asic
TiGr1982 - Thursday, September 11, 2014 - link
OK, I accept your arguments, but I still don't like this kind of terminology. To me, one may call something like a fixed-function video decoder an "ASIC" (for example the UVD blocks inside Radeon GPUs), but not a GPU as a whole, because people have been doing GPGPU on GPUs for years now, and the "General Purpose" in GPGPU contradicts the "Application Specific" in ASIC, doesn't it?
So, overall it's a terminology/naming issue; everyone uses whatever naming they prefer.
chizow - Thursday, September 11, 2014 - link
I think you are over-analyzing things a bit. When you look at the entire circuit board for a particular device, you will see each main component or chip is considered an ASIC, because each one has a specific application.For example, even the CPU is an ASIC even though it handles all general processing, but its specific application for a PC mainboard is to serve as the central processing unit. Similarly, a southbridge chip handles I/O and communications with peripheral devices, Northbridge handles traffic to/from CPU and RAM and so on and so forth.
TiGr1982 - Thursday, September 11, 2014 - link
OK, then according to this (broad) understanding, every chip in the silicon industry may be called an ASIC :)
Let it be.
chizow - Friday, September 12, 2014 - link
Yes, that is why everyone in the silicon industry calls chips that have specific applications ASICs. ;)
Something like a capacitor or a resistor would not be, as those are common commodity parts.
Sabresiberian - Thursday, September 11, 2014 - link
I reject the notion that we should be satisfied with a slower rate of GPU performance increase. We have more use than ever before for a big jump in power: 2560x1440@144Hz, 4K@60Hz.
Of course it's all well and good for me to say that without being a micro-architecture design engineer myself, but I think it's time for a total re-think. Or if the companies are holding anything back - bring it out now, please! :)
Stochastic - Thursday, September 11, 2014 - link
Process node shrinks are getting more and more difficult, equipment costs are rising, and the benefits of moving to a smaller node are also diminishing. So sadly I think we'll have to adjust to a more sedate pace in the industry.
TiGr1982 - Thursday, September 11, 2014 - link
I'm a longstanding AMD Radeon user of more than 10 years, but after reading this R9 285 review I can't help but think that, based on the results of the smaller GM107 in the 750 Ti, GM204 in the GTX 970/980 may offer much better performance per watt and per unit of die area (at least for gaming tasks) than the whole AMD GPU lineup. Soon we'll see whether or not this is the case.
TiGr1982 - Thursday, September 11, 2014 - link
BTW, is Tonga the only new GPU AMD has to offer in 2014?
(If I'm not mistaken, the previous one from AMD, Hawaii, was released back in October 2013, almost a year ago.)
Does anybody know?
HisDivineOrder - Thursday, September 11, 2014 - link
The thing is, the moment I heard AMD explaining how Tonga was too new for current Mantle applications, I thought, "And there's the other shoe dropping."
The promise of a low-level API is that you get low-level access and the developer takes on more of the burden of optimizing the game instead of a driver team. This is great for the initial release of the game and great for the company that wants to have less of a (or no) driver team, but it's not so great for the end user, who is going to wind up buying new cards and needing that Mantle version to work properly in games no longer supported by their developer.
It's hard enough getting publishers and/or developers to work on a game a year or more after release to fix bugs that creep in and in some cases hard to get them to bother with resolution switches, aspect ratio switches, the option to turn off FXAA, the option to choose a software-based AA of your choice, or any of a thousand more doohickeys we should have by now as bog-standard. Can you imagine now relying on that developer--many of whom go completely out of business after finishing said title if they happen to work for Activision or EA--to fix all the problems?
This is why a driver team is better working on it. Even though the driver team may be somewhat removed from the development of the game, the driver team continues to have an incentive to want to fix that game going forward, even if it's a game no longer under development at the publisher. You're going to be hard pressed to convince Bobby Kotick at Activision that it's worth it to keep updating versions of games older than six months (or a year for Call of Duty) because at a certain point, they WANT you to move on to another game. But nVidia and AMD (and I guess Intel?) want to make that game run well on next gen cards to help you move.
This is where Mantle is flawed and where Mantle will never recover. Every time they change GCN, it's going to wind up with a similar problem. And every time they'll wind up saying, "Just switch to the DX version." If Mantle cannot be relied upon for the future, then it is Glide 2.0.
And why even bother at all? Just stick with DirectX from the get-go, optimize for it (as nVidia has shown there is plenty of room for improvement), and stop wasting any money at all on Mantle since it's a temporary version that'll rapidly be out of date and unusable on future hardware.
The-Sponge - Thursday, September 11, 2014 - link
I do not understand how they got their R9 270X temperatures; my OC'd R9 270X never even comes close to the temps they got...
mac2j - Friday, September 12, 2014 - link
It's great that they've caught up with H.264 in hardware, and the card otherwise looks fine. The bottom line for me, though, is that I don't see the point of buying a card now without H.265 in hardware and an HDMI 2.0 port - 2 things Maxwell will bring this year. I haven't heard what AMD's timetable is there, though.
P39Airacobra - Friday, October 17, 2014 - link
It really irritates me that they are making these cards throttle to keep power and temps down! That is pathetic! If you can't make the thing right, just don't make it! Even if it throttles by 0.1MHz it should not be tolerated! We pay good money for this stuff and we should get what we pay for! It looks like the only AMD cards worth anything are the 270s and under. It stinks that you have to go Nvidia to get more power, because Nvidia really gouges people on price! But I must say the GTX 970 is priced great if it is still around $320. AMD should never have even tried with this R9 285! First of all, when you pay that much you should get more than 2GB. And the card is pretty much limited to the performance of the R9 270s because of the VRAM amount! Yeah, the 285 has more power than the 270s, but what's the point when you do not have enough VRAM to use that extra power where a card like this needs to be? In other words, if you are limited to 1080p anyway, why pay the extra money when an R7 265 will handle anything at 1080p beautifully? This R9 285 is a pointless product! It is like buying a rusted-out Ford Pinto with a V8 engine: yeah, the engine is nice, but the car is a POS!
P39Airacobra - Friday, January 9, 2015 - link
(QUOTE) "So a 2GB card is somewhat behind the times as far as cutting edge RAM goes, but it also means that such a card only has ¼ of the RAM capacity of the current-gen consoles, which is a potential problem for playing console ports on the PC (at least without sacrificing asset quality)."
(SIGH) So now even reviewers are pretending the consoles can outperform a mid-range GPU! WOW! How about telling the truth like you did before you got paid off! The only reason a mid-range card has problems with console ports is because they are no longer optimized! They basically just make it run on the PC and say "xxxx you, customers, here it is!" And no, the 8GB in the consoles is used for everything, not only as VRAM! We are not stupid idiots who fall for anything, like the idiots in Germany back in the 1930s!