MrSpadge - Friday, October 24, 2014 - link
Ay Caramba!
When reading the original article yesterday I thought: "Oh dear, this whole Mantle business seems like a huge waste of development time." The current results change this considerably to: "Wow, they're doing the right thing!"
Improving minimum frame rates is what really counts towards making a game / application feel smooth. From my point of view the GPU turbo modes should also be used to equalize maximum and minimum frame rates: there's no point in rendering at super-high frame rates, especially with Free-/G-Sync. Better to throttle the GPU a bit at light to moderate loads so there's some thermal budget to spare for short bursts of high load.
djscrew - Sunday, October 26, 2014 - link
+1
eddman - Friday, October 24, 2014 - link
That DirectX CF minimum FPS looks suspicious to me. You sure it's not a game bug?
JarredWalton - Friday, October 24, 2014 - link
It's just a frame rate pacing issue -- I've added two images to show what's going on. Basically the AFR frame times are all over the place compared to the Mantle and single GPU frame times.
eanazag - Friday, October 24, 2014 - link
This ends up being akin to the SSD consistency performance. Hence, there is value in Mantle. What would really be cool would be to see what the CPU was up to at the same time. The sad part is Jarred's electricity bill probably spiked this month from testing the 290X in CF.
whyso - Friday, October 24, 2014 - link
You can't compare Mantle vs. DX without looking at Nvidia as well. Where are the Nvidia frame time charts?
JarredWalton - Friday, October 24, 2014 - link
I don't have an SLI configuration yet, so I can't test it. Single GPU frame times are fine, but this was specifically looking at CrossFire D3D vs. Mantle. (FWIW, a colleague at another web site is reporting rather jittery frame times on SLI -- hopefully not as bad as D3D CF, though.)
TheJian - Monday, October 27, 2014 - link
The SLI isn't that important here, heck, just throw in a SINGLE 980/970. Nobody is using either SLI or CF. I say nobody when it is less than 3% of the public (according to Steam's surveys). Mind you, that is a percentage of that 3% that runs above 1920x1200. So it's more important (to 97% of us anyway) to at least show the single 980/970 in the charts ALWAYS.
As others have said, you need to show the other side (SLI or not, some may not have read the other story). If NV+DX11 is beating them already, who cares about this then? If not, show that. That is why they need to be in there, even if just single cards, to answer the question of DX11 on NV vs. everything you're showing from AMD.
http://www.anandtech.com/show/8640/benchmarked-civ...
You already have the results from there, just add them. From that article the bottom 1% doesn't mean much and you saw no problems in the game. Comical that you say the bang for buck winner is the 290X because it can be had for under $400. You can get the 970 for $330. The bang for buck winner should be the one who wins in many games, not just this one, and the 970 topples the 290X in a lot of stuff.
http://www.tomshardware.com/reviews/nvidia-geforce...
Especially where most of us play (below 1920x1200, only 3% above that). In many cases the 970 beats the 290X by quite a lot, but even if the 290X won much of it by 10% you'd have a hard time claiming it was the bang for buck champ (costing 10% more, it should be winning by 10% in EVERY game, Mantle or not). This isn't even counting the OCing that can be done on the 970/980 or the noise (as even you pointed out in the link above) that the 280/290 pound out.
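(A quick aside on the value math: the sketch below uses only the two street prices quoted above, $330 and $400, and assumes no benchmark numbers at all - it only shows how large a lead the pricier card would need for performance-per-dollar parity.)

```python
# Value-parity sketch using only the prices quoted above; no benchmark
# numbers are assumed. It just shows how big a performance lead the pricier
# card needs before it matches the cheaper one on FPS per dollar.
price_970 = 330.0
price_290x = 400.0

premium = price_290x / price_970 - 1   # ~0.21, i.e. about 21% more expensive
print(f"The 290X costs {premium:.0%} more, so it would need to be at least "
      f"{premium:.0%} faster on average to match the 970 on performance per dollar.")
```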
And FWIW, very few of us run SLI or CF as I already noted, as it's a percentage of the 3% that run above 1920x1200...LOL. Concentrating on SLI/CF is writing an article for an audience the size of a percentage of 3% of the public (so like 1.5% of us overall?) ;) Judging by the number of games using Mantle that show AMD victories, I'll take a 970 to go please...ROFL. Then again, I'll be waiting for the 20nm versions, which should make an already power-sipping Maxwell that much better, but you get the point. That AMD portal costs you guys a lot of objective journalism IMHO.
Navvie - Monday, October 27, 2014 - link
This is comment of the year.
ZeDestructor - Monday, October 27, 2014 - link
I'm waiting for big Maxwell on 20nm, at which point I'll grab a pair for 5760x1200 gaming :)
/me is in the 0.33% ^_^
Stuka87 - Monday, October 27, 2014 - link
Seriously, this huge rant and all you had to do was look at the other article?
http://anandtech.com/show/8640/benchmarked-civiliz...
Bleakwise - Saturday, August 13, 2016 - link
This story isn't about nvidia, or which company/card has a bigger nis, it's about SFR and rendering technologies.
They've already done a Crossfire vs. SLI test, there's one for pretty much every single card in every single segment of the market, why do we need to just put the same benchmark on blast every single time and ignore everything else in the world?
Anandtech should only do AMD vs Nvidia articles and nothing else? That makes sense to you?
Bleakwise - Saturday, August 13, 2016 - link
I mean, just re-hash these same dead articles 365 days a year, NEVER do anything else, God forbid someone like you stumble upon a non Radeon vs Geforce headline... makes perfect sense, derp.
eanazag - Friday, October 24, 2014 - link
Sure he can. It is not a thorough analysis of the entire scope of the GPU market. If he did an analysis on PhysX, would that make you happy?
Nvidia doesn't have a Mantle counterpart yet, or maybe ever. This was still a worthwhile read.
Impulses - Friday, October 24, 2014 - link
I doubt NV would ever fall significantly behind on the software/API front like that, and I say that as the happy owner of two R9 290s in CF... They might find a sneaky way to undermine Mantle rather than compete directly with it, but they're not gonna take it on the chin. :p
inighthawki - Friday, October 24, 2014 - link
The problem is the above article doesn't show the number one important detail about Mantle: is it actually better, or is AMD just slacking on their D3D11 drivers?
That's why an NVIDIA data point is important here. Only once you show that the competition cannot match the performance consistency can you really determine the usefulness of Mantle.
From what I've seen in the past, NVIDIA's D3D11 offering is much better than AMD's, and in some cases rivaled the Mantle driver.
HalloweenJack - Friday, October 24, 2014 - link
Can you bench using an AMD octo core with Mantle? Hearing it's a match for Intel in this one.
JarredWalton - Friday, October 24, 2014 - link
I don't have one, but that's what AMD told me as well -- FX-8350 is able to match i7-4790K with Mantle. Of course AM3+ is also a dying platform and the FX CPUs use a lot more power than the Haswell parts.
HalloweenJack - Friday, October 24, 2014 - link
Well, AMD have released new CPUs for AM3+ recently, so they might not think it's dying ;)
MrSpadge - Saturday, October 25, 2014 - link
Except that these new CPUs are hardly different from the old ones, and should have been released 2 years ago.
Tikcus9666 - Saturday, October 25, 2014 - link
The FX 8 series do not use a lot more power, they just use more power.
Performance aside, if system A uses 50 watts more than system B at 12p per kWh, and system A is £130 cheaper than system B (the price difference between an i7 4790 and an FX 8350, assuming all other prices are the same), it will take 21,667 hours to get your money back.
At 4 hours per day that is just short of 15 years; no one is going to keep the computer long enough for the power difference to matter as a home user. This is different if you are running an office with hundreds of systems.
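(For what it's worth, the arithmetic above checks out. A minimal sketch using only the figures quoted in the comment - 50 W, 12p per kWh, £130 - and nothing else:)

```python
# Break-even sketch for the power-vs-purchase-price argument above.
# The only inputs are the figures quoted in the comment: a 50 W difference,
# 12p per kWh, and a £130 purchase-price gap.
extra_power_w = 50          # extra draw of the cheaper system, in watts
price_per_kwh = 0.12        # electricity cost in £ per kWh
purchase_saving = 130.0     # £ saved up front on the cheaper system

extra_cost_per_hour = (extra_power_w / 1000) * price_per_kwh   # £0.006 per hour
break_even_hours = purchase_saving / extra_cost_per_hour        # ~21,667 hours

hours_per_day = 4
break_even_years = break_even_hours / hours_per_day / 365       # ~14.8 years

print(f"{break_even_hours:,.0f} hours to break even, "
      f"about {break_even_years:.1f} years at {hours_per_day} h/day")
```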
CrazyElf - Friday, October 24, 2014 - link
Overall it is looking like the Mantle split frame rendering ("scissor mode") is a step forward.
There is a lot less stutter with the split frame rendering than with AFR. The minimum frame rates are higher compared to single and double GPU, and that is what matters the most. I suspect that so long as Nvidia uses AFR as well, their results will be similar to the Crossfire non-Mantle performance.
Personally, I wish that scissors mode was more widespread and that there was an emphasis on minimum frame rates, rather than maximizing average FPS.
On that note, there is one other issue that is unrelated to all of this that makes me want to skip this title. The game play itself is said to be disappointing. It shares all of the drawbacks of Civ V, and none of the advantages of Alpha Centauri. I suspect that the expansions will be unable to fix the problem.
There is one other issue - are 3 and 4 GPU setups compatible with SFR?
JarredWalton - Friday, October 24, 2014 - link
At present Mantle and SFR only support two GPUs. Firaxis is supposedly working on a fix to support three and four GPU configurations, but I don't have a time frame for that. I also suspect the scaling will be a case of diminishing returns (as usual for 3-way and 4-way setups).
CrazyElf - Friday, October 24, 2014 - link
Is there a way to post a link here?
I've got results to show you, but they got flagged as spam.
CrazyElf - Friday, October 24, 2014 - link
Anyways, it's steeply diminishing returns as you note.
This is from Udteam's review:
- GTX 780 Ti : 100% -> 74% -> 41% -> 34%
- GTX 780 : 100% -> 68% -> 50% -> 14%
- R9 290X : 100% -> 84% -> 58% -> 30%
- R9 290 : 100% -> 82% -> 59% -> 37%
Average FPS per GPU added. This is in AFR, so with SFR, the scaling would be even lower.
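(To make those figures concrete, here's a small sketch that rolls the per-added-GPU contributions listed above into total scaling - assuming, as the list reads, that each percentage is the extra performance each added card contributes relative to a single card.)

```python
# Rolling per-added-GPU contributions (the AFR figures quoted above, as
# fractions of a single card's performance) into total scaling.
per_gpu_contribution = {
    "GTX 780 Ti": [1.00, 0.74, 0.41, 0.34],
    "GTX 780":    [1.00, 0.68, 0.50, 0.14],
    "R9 290X":    [1.00, 0.84, 0.58, 0.30],
    "R9 290":     [1.00, 0.82, 0.59, 0.37],
}

for card, contributions in per_gpu_contribution.items():
    total = sum(contributions)   # e.g. GTX 780 Ti: 1.00 + 0.74 + 0.41 + 0.34 = 2.49x
    print(f"{card}: four cards deliver roughly {total:.2f}x one card "
          f"({total / 4:.0%} average efficiency per GPU)")
```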
Flunk - Friday, October 24, 2014 - link
Hiding the setting in a config file where even a professional reviewer can't find it is not acceptable. What it really means for most players is that they'll end up with the performance you showed in the first review. I hope they patch this into the settings menu.
Impulses - Friday, October 24, 2014 - link
No doubt. Though as a multi screen gamer I'm kinda used to this, just an intrinsic part of PC gaming...
eanazag - Friday, October 24, 2014 - link
I love that AMD is testing with Intel CPUs. Intel can lay off some of their marketing team now since AMD is helping them sell more -E series CPUs.
#sadbuttrue
The_Assimilator - Saturday, October 25, 2014 - link
Haha, I noticed that too. Seems even AMD's GPU division doesn't want to touch their own company's CPUs.
TiGr1982 - Saturday, October 25, 2014 - link
Their own old AM3+ platform does not even support PCIe 3.0 (with a very few specific motherboard exceptions), so their own PCIe 3.0-capable GPUs can potentially be limited on AM3+, being stuck with PCIe 2.0. This is somewhat ridiculous, but this is how it is.
FM2+ with a Kaveri APU does support PCIe 3.0, but Kaveri's CPU part is like an i3 and is not up to the job of feeding, say, Hawaii GPU(s) (especially more than one).
So AM3+ is old, FM2+ is budget-class, and AMD's top GPUs are left with Intel platforms to run on. That's how it is.
chizow - Saturday, October 25, 2014 - link
AMD stopped pretending their CPU platforms were a viable solution years ago when Nvidia was outpacing them on their own platform.
TiGr1982 - Saturday, October 25, 2014 - link
Yes, that's the case - AMD CPUs haven't made a lot of sense for high performance for a long time already.
BTW, I just got a very slightly used reference R9 290 - in my personal circumstances effectively as a trade-in for $50 (yes, fifty), and for the price it's a nice upgrade from my HD 7950 :) The PSU is actually doing OK, no need to upgrade the PSU.
The R9 290 is not that bad - it's pretty fast and furious and not as badly loud as people tend to think.
As an almost free upgrade, it's OK :)
Gigaplex - Friday, October 24, 2014 - link
No mention of whether or not SFR causes tearing because the seam where the two halves meet wasn't rendered identically or properly synchronised?
Thracks - Saturday, October 25, 2014 - link
The tiles overlap slightly to address this issue.
HalloweenJack - Friday, October 24, 2014 - link
Shame AnandTech doesn't bench with AMD CPUs anymore - paid out by Intel???
Impulses - Friday, October 24, 2014 - link
Or maybe most readers have Intel CPUs and in most cases it makes sense to bench with the CPU that's less likely to be a bottleneck... That would make far too much sense tho.
klagermkii - Friday, October 24, 2014 - link
I just don't get how someone can ask that when the entire banner at the top is branded as "AMD CENTER" and "Presented by AMD"... this isn't somehow some Intel conspiracy.
JarredWalton - Saturday, October 25, 2014 - link
Honestly, I don't even own an AMD desktop any longer -- and why would I, seeing as I have a couple with Intel Haswell and Ivy Bridge? I've mentioned this in buyer's guides before, but when you're purchasing an entire system the difference in price of $100 is often negligible when you consider power and performance.
Anyway, AMD may be sending me a Kaveri A10 setup, which would allow for additional testing. Keep in mind that also means twice the number of benchmarks to run if I test every single option. This is the same reason I didn't test my i3-3225: lack of time. But we'll see what I can do in the meantime....
chizow - Saturday, October 25, 2014 - link
I wouldn't even bother tbh; while underpowered AMD APUs might show some benefit from Mantle, they are also the least likely use-cases, i.e. really underpowered AMD CPUs coupled with really powerful AMD GPU solutions.
TiGr1982 - Sunday, October 26, 2014 - link
Indeed; completely agree.
chizow - Saturday, October 25, 2014 - link
AMD doesn't even bench with AMD CPUs anymore, I guess they are paid out by Intel too?
silverblue - Saturday, October 25, 2014 - link
Except Jarred has said that AMD's own testing has shown the 8350 to be competitive in this game.
TiGr1982 - Sunday, October 26, 2014 - link
Read my comment above regarding the technical side of using AMD CPUs with top GPUs.
AMD CPU platforms are just not there to be used with serious graphics firepower these days.
D. Lister - Saturday, October 25, 2014 - link
So when DX12 comes out, what happens to Mantle? What impact would having to juggle between all of these very different APIs (DX9, DX10/10.1, DX11.0/.1/.2, DX12, Mantle) have on driver stability?
ZeDestructor - Saturday, October 25, 2014 - link
Stability will remain as it is right now. Hell, stability has remained pretty much at the same level since Vista launched with a new driver model.
What may change is the level of support, but don't expect much. Thanks to the XBone, DX11.X (that's an actual version that adds a few DX12 bits to it) will become the "legacy"/baseline platform, while DX12 moves forward as the prime platform. Older DX versions will be pretty much left as is as much as possible: no point breaking the current good and stable versions.
eddman - Saturday, October 25, 2014 - link
Perhaps if DX12 proves to be fast and developer friendly, then we might see the decline of Mantle, but those are all "if"s. Have to wait and see.
JlHADJOE - Saturday, October 25, 2014 - link
This really brings home the point that plain averaged FPS has long outlived its usefulness as a metric of gaming performance. Frame times, or minimum instantaneous FPS, should be the new standard.
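(As a rough illustration of the metric being argued for - a minimal sketch only, not any site's actual FCAT methodology, and the frame times below are invented placeholders:)

```python
# Turning a list of frame times (in ms) into the two metrics discussed above:
# plain average FPS vs. a percentile-based "minimum instantaneous" FPS.
# The frame times below are invented placeholders, not measurements.
frame_times_ms = [16.7, 16.9, 17.1, 33.5, 16.8, 17.0, 45.2, 16.6, 17.3, 16.9]

avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))

# The 99th-percentile frame time captures the slow frames that read as stutter;
# its reciprocal is a far more honest "smoothness" number than the average.
ordered = sorted(frame_times_ms)
p99_frame_time = ordered[int(0.99 * (len(ordered) - 1))]
min_instantaneous_fps = 1000 / p99_frame_time

print(f"Average: {avg_fps:.1f} FPS, "
      f"99th-percentile minimum: {min_instantaneous_fps:.1f} FPS")
```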
chizow - Saturday, October 25, 2014 - link
Yes, Nvidia drove this point home over a year ago when they graciously revealed their FCAT profiling tools to the industry at a time when various naysayers insisted there were no frametime/frame latency issues with competitor solutions.
ZeDestructor - Saturday, October 25, 2014 - link
Talking of which, where are the FCAT results for nvidia cards?!
chizow - Saturday, October 25, 2014 - link
I think Jarred is working on Nvidia results since there are none included at all.
silverblue - Saturday, October 25, 2014 - link
The original article has NVIDIA single GPU results. The point of this one is to highlight that the Mantle implementation works, as well as to point out its benefits with regard to a more stable frame rate. Having NV results within this article would've just been confusing considering it's about comparing API performance on AMD cards. Just load both articles, put them side by side, and compare... or just wait for the SLI results before you do so.
ZeDestructor - Saturday, October 25, 2014 - link
Stability will remain as it is right now. Hell, stability has remained pretty much at the same level since Vista launched with a new driver model.
What may change is the level of support, but don't expect much. Thanks to the XBone, DX11.X (that's an actual version that adds a few DX12 bits to it) will become the "legacy"/baseline platform, while DX12 moves forward as the prime platform. Older DX versions will be pretty much left as is as much as possible: no point breaking the current good and stable versions.
ZeDestructor - Saturday, October 25, 2014 - link
Crap.. meant to reply to D. Lister above....
chizow - Saturday, October 25, 2014 - link
So is this AMD's response to G-Sync? Use 2 GPUs to reduce multi-GPU input latency/jitter? I thought it might actually be something cool like Nvidia's SLI AA where each GPU rendered slightly offset frames and merged them in the framebuffer to produce a 2x sampled image, but I guess they would go this route if the game is already CPU-limited and already having trouble producing more frames.
Interesting to see Nvidia has no problems beating AMD without Mantle, however.
Creig - Monday, October 27, 2014 - link
Uh, no. AMD is working on FreeSync, an adaptive-sync based solution to counter Nvidia's vendor-locked, expensive G-Sync. It sounds like you need to do some more reading before posting as you're mixing up your technologies.
TiGr1982 - Tuesday, October 28, 2014 - link
AMD's response to G-Sync is FreeSync, a DisplayPort-based technology.
The AMD Hawaii GPU already supports FreeSync in hardware, according to AMD's website; all one needs is a FreeSync-supporting monitor (some are going to be in retail soon, in a matter of several months) - connect a Hawaii card to it and there you go.
chizow, if you are going to talk about something non-nVidia, then you have to get out of your beloved nVidia sandbox and read some things first.
Kevin G - Saturday, October 25, 2014 - link
A couple of things. SFR isn't new, as this article points out. However, it is different conceptually than 3dfx's SLI technology. What 3dfx did was interlacing, by having one GPU render the even horizontal lines while the other renders the odd lines. This also took care of the problem of load balancing the workload, as neighboring lines generally took the same amount of work to render. SFR reappeared with DirectX 9 and the Geforce 7950GX2. So nVidia implemented a load balancing scheme so that the upper half of a frame and the lower half would be rendered on two different GPUs. Any dual GPU Geforce setup at the time had the option of using SFR in the drivers, though they defaulted to AFR due to simplicity and compatibility. A dual Geforce 7950GX2 setup had the option of using pure AFR, pure SFR or a hybrid AFR + SFR. Scaling problems abounded on a dual 7950GX2 though. DX9 could only render a max of three frames simultaneously in its pipeline, so pure AFR could only use 3 GPUs. SFR had scaling problems of its own due to the load balancing algorithm, especially in the pure SFR quad GPU scenario. The AFR + SFR scenario was interesting but incurred the bugginess of both implementations with no overall performance advantage.
Things are a bit different than they were 8 years ago when the Geforce 7950GX2 launched. This SFR implementation is being done at the application level and not at the driver level. Due to the context, the results should be far better than in the past. The application can also feed predictive information into the SFR load balancer algorithm that a driver implementation would never have access to (and rightfully should not). This also leaves open the possibility of SFR + AFR in a quad GPU system. I'm really curious what frame rate controls Mantle exposes to the developers at the application level, as this could help eliminate the latency issues by direct control. Being able to scale by both SFR and AFR opens the door to 6-way and greater GPU systems actually being useful for gaming (they can only really be used for compute currently).
The big downside with Mantle in this scenario (outside of Mantle currently being an AMD-only API) is that the game developers have to tackle the handling of multiple GPUs. Personally I just don't see developers putting in the time, effort and money to increase performance in these incredibly niche systems.
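(Purely to illustrate the load-balancing idea described above: a toy sketch that nudges the split point based on the previous frame's per-GPU render times. It is not Firaxis's or AMD's actual algorithm, and all names and parameters here are hypothetical.)

```python
# Toy sketch of SFR load balancing: shift the horizontal split so the GPU
# that finished faster last frame gets a larger share of the next one.
# Illustrative only - not Firaxis's or AMD's actual implementation.
def rebalance_split(split, time_top_ms, time_bottom_ms,
                    gain=0.25, min_split=0.1, max_split=0.9):
    """split = fraction of the frame height assigned to the 'top' GPU."""
    total = time_top_ms + time_bottom_ms
    if total == 0:
        return split
    imbalance = (time_bottom_ms - time_top_ms) / total  # > 0: top GPU was faster
    return max(min_split, min(max_split, split + gain * imbalance))

# Example: starting at a 50/50 split, the top GPU took 12 ms and the bottom
# 20 ms last frame, so the top GPU is handed a bigger slice next frame (~0.56).
new_split = rebalance_split(0.5, 12.0, 20.0)
print(f"New split: {new_split:.2f}")
```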
zodiacsoulmate - Sunday, October 26, 2014 - link
no nvidia test.... also i don't really see how mantle is useful in civilization 5...
JarredWalton - Monday, October 27, 2014 - link
Late in the game, you can get a map with tons of units on the screen, which can result in lots more draw calls than what you would get from a typical FPS. So if Mantle can increase the number of draw calls you can do, this will raise the minimum frame rate and average frame rate quite a bit. A great example of this is the minimum frame rates with the R9 290X. They go from 48 FPS to 56 FPS even at QHD, and at 1080p the difference is even more dramatic (49 to 68 FPS). And that's with a relatively beefy i7-4770K OC.
HisDivineOrder - Monday, October 27, 2014 - link
You really shouldn't apologize. Your experience is the same one most users would have had if people like you didn't call them out for not enabling support when the setting is enabled in-game.
Do you really think most people are going to realize something is amiss and that a text file ALSO has to be edited?
Nope. They'll just go on in ignorance because they'll assume--as most anyone would assume--that the game will work the way it's supposedly advertised to work.
Whoops. If anyone should be apologizing, it's the person who chose to make editing a text file necessary to make the feature work. It also begs the question, "Why?" Is the feature still in beta? Alpha? A work in progress? Is that why text files need to be edited?
That's usually why, after all.
Spawne32 - Tuesday, October 28, 2014 - link
It's a shame really, because the game SUCKS, but it does play well on Mantle.