Yeah, it has to be. A few years ago Steam announced they had some 25 million users, which was actually very close to the individual numbers for the 360 and PS3 at the time. Valve keeps their total numbers and sales really close to the vest though, so it's hard to get figures out of them unless they're announcing milestones.
I wonder what they mean by "active". Most likely it's the number of users with the Steam client running. Well, it has been running idle for more than a year for me, yet I'm an "active" user, I guess...
Worse than that, he shows the resolution stats, and we have 30% at 1080p and less than 5% at higher resolutions, and yet he totally ignores and blows off the 65% that are below 1080p.
He pretends they don't even exist. Must be tough looking so far down the nose at what you'd prefer not to see or notice. Amazing "accuracy" as usual, as he immediately rambles off into his personal fantasy about multiple screens "gaining"... R O F L
Still publishing Crossfire numbers as legit, despite multiple sites showing numerous runt frames which never reach the screen? This is disingenuous, to say the least.
"If you were buying new, the obvious answer would be looking at an i5-3570K on Ivy Bridge rather than the 2500K"
Ian basically wanted to get a relatively broad test suite, at as many performance points as possible. Haswell, however, is really quite a bit quicker. More than anything, this article is an introduction to how they are going to be testing moving forward, as well as a list of recommendations for different budgets.
This is incorrect. It is only competitive when you TAP out the GPU by forcing it into situations it can't handle. If you drop the res to 1080p, suddenly the CPU is VERY important and the results part like the Red Sea.
This is another attempt at covering for AMD and trying to help them sell products (you can judge whether it's intentional or not on your own). When no single card can handle the resolution being forced on it (1440p), you end up with ALL CPUs looking like they're fine. This is just a case of every CPU saying hurry up, mr. vid card, I'm waiting (or we're all waiting). Lower the res to where they can handle it and CPUs start to show their true colors. If this article had been written with 1080p as the focus (as even his own survey shows 96% of us use it OR lower, and adding in 1920x1200 you end up with 98.75%!!) you would see how badly AMD is doing vs. Intel, since the video cards would NOT be brick-walled screaming under the load.
http://www.tomshardware.com/reviews/neverwinter-pe... An example of what happens when you put the vid card at 1080p where cpu's can show their colors. "At this point, it's pretty clear that Neverwinter needs a pretty quick processor if you want the performance of a reasonably-fast graphics card to shine through. At 1920x1080, it doesn't matter if you have a Radeon HD 7790, GeForce GTX 650 Ti, Radeon HD 7970, or GeForce GTX 680 if you're only using a mid-range Core i5 processor. All of those cards are limited by our CPU, even though it offers four cores and a pretty quick clock rate."
It's not just Civ5. I could point out how inaccurate the suggestions in this 1440p article are all day. Just start looking up CPU articles on other websites and check the 1080p data. Most CPU articles test with a top card (7970 or 680, etc.) so you get to see the TRUTH. The CPU is important in almost EVERY game, unless you shoot the resolution up so high they all score the same because your video card can't handle the job (thus making ANY CPU spend all day waiting on the vid card).
I challenge anandtech to rerun the same suite, same chips at 1080p and prove I'm wrong. I DARE YOU.
http://www.hardocp.com/article/2012/10/22/amd_fx83... More evidence of what happens when gpu is NOT tapped out. Look at how Intel is KILLING AMD at hardocp. Even if you say "but eventually I'll up my res and spend $600 on a 1440p monitor", you have to understand that as you get better gpu's that can handle that res, you'll hate the fact you chose AMD for a cpu as it will AGAIN become the limiter. "Lost Planet is still used here at HardOCP because it is one of the few gaming engines that will reach fully into our 8C/8T processors. Here we see Vishera pull off its biggest victory yet when compared to Zambezi, but still lagging behind 4 less cores from Intel."
"Again we see a new twist on the engine above, and it too will reach into our 8C/8T. While not as pronounced as Lost Planet, Lost Planet 2 engine shows off the Vishera processors advancements, yet it still trails Intel's technology by a wide margin."
"The STALKER engine shows almost as big an increase as we saw above, yet with Intel still dealing a crippling gaming blow to AMD's newest architecture." Yeah, a 65% faster Intel is a LOT right? Understand if you go AMD now, once you buy a card (20nm maxwell etc? 14nm eventually in 3yrs?) you will CRY over your cpu limiting you at even 1440p. Note the video card Hardocp use for testing was ONLY a GTX 470. That's old junk, he could now run with 7970ghz or 780gtx and up the res to 1080p and show the same results. AMD would get a shellacking.
http://techreport.com/review/24879/intel-core-i7-4... Here, Techreport did it at 1080p. 20% lower for the A10-5800 than the 4770K in Crysis 3. It gets worse with Far Cry 3, etc. In Far Cry 3 the i7-4770K scored 96fps at 1080p, yet AMD's A10-5800 scored a measly 68. OUCH. So roughly 30% slower in this game. HOLY COW, man, check out Tomb Raider...Intel 126fps! AMD A10-5800 68fps! Does Anandtech still say this is a good CPU to go with? At the resolutions 98.75% of us run at, YOU ARE WRONG. That's almost 2x faster in Tomb Raider at 1080p! Metro Last Light: Intel 93fps vs. AMD A10-5800 51fps, again almost TWO TIMES faster!
From Ian's conclusion page here: "If I were gaming today on a single GPU, the A8-5600K (or non-K equivalent) would strike me as a price competitive choice for frame rates, as long as you are not a big Civilization V player and do not mind the single threaded performance. The A8-5600K scores within a percentage point or two across the board in single GPU frame rates with both a HD7970 and a GTX580, as well as feel the same in the OS as an equivalent Intel CPU."
He's not even talking about the A10-5800 that got SMASHED at Techreport as shown in the link. Note they only used a Radeon 7950. A 7970GHz or GTX 780 would be even less taxed and show even larger CPU separations. I hope people are getting the point here. Anandtech is MISLEADING you at best by showing a resolution higher than 98.75% of us are using and tapping out the single GPU. I could post a dozen other CPU reviews showing the same results. Don't walk, RUN away from AMD if you are a gamer today (or tomorrow). Haswell boards are supposed to take a Broadwell chip also, even more ammo to run from AMD.
Ian is recommending a CPU that is lower than the one I show getting KILLED here. Games might not even be playable, as the A10-5800 was hitting 50fps AVG on some things. What would you hit with a lower CPU avg, and worse, what would the mins be? Unplayable? Get a better CPU. You've been warned.
Hmm... If the game is fast enough at 1440p then it is fast enough for 1080p... We are talking about serious players. Who on earth would buy a 7970 or 580 for gaming at 1080p? That is serious overkill... We all know that Intel will run faster if we use 720p, just because it is a faster CPU than AMD; nothing new there since the Pentium 4 and Athlon era. What this article tells us is that if you want to play games with some serious GPU power, you can save money by using an AMD CPU with a single GPU, or even in some cases a dual-GPU setup. If you go beyond that, the CPU becomes a bottleneck.
The killing happened at 1080p also which is what techreport showed. Since 98.75% of us run 1920x1200 or below, I'm thinking that is pretty important data.
The second you put in more than one card, the CPUs separate even at 1440p. Meaning next year's SINGLE card, or the one after, will AGAIN separate the CPUs, as that single card will end up waiting on the CPU and the bottleneck goes back to the CPU. Putting that aside, Hardocp showed even the mighty Titan at $1000 had stuff turned off at 1080p. So you are incorrect. Is it serious overkill if Hardocp is turning stuff off for a smooth game experience? The 7970/GTX 680 had to turn off even more stuff in the 780 GTX review (Titan and the 780 GTX mostly had the same stuff on, but the 7970GHz and 680 GTX they compared to turned off quite a bit to remain above 30fps).
I'm a serious player, and I can't run 1920x1200 with my radeon 5850 which was $300 when I bought it. I'm hoping maxwell will get me 30fps with EVERYTHING on in a few games at 1440p (I'm planning on buying a 27 or 30in at some point) and for the ones that don't I'll play them on my Dell 24 as I do now. But the current cards (without spending a grand and even that don't work) in single format still have trouble with 1080p as hardocp etc has shown. I want my next card to at least play EVERY game at 1920x1200 on my dell, and hope for a good portion on the next monitor purchase. With the 5850 I run a lot of games on my 22in at 1680x1050 to enable everything. I don't like turning stuff down or off, as that isn't how the dev intended me to play their game right?
Apparently you think all 7970 and 580 owners are running 1440p and up? Ridiculous. The Steam survey says you are woefully incorrect. 98.75% of us are running 1920x1200 or below, and a TON of us have 7970s, 680s, 580s, etc. (not me yet) and are enjoying the fact that they NEVER turn stuff down (well, apparently you still do in some games...see the point?). Only DUAL card owners are running above that, as the Steam survey shows; go there and check out the breakdown. You can see the population (even as small as that 1% is...LOL) that has TWO cards running above 1920x1200. So you are categorically incorrect, or Steam's users change all their resolutions down just to fake a survey?...ROFL. Ok. Whatever. You expect me to believe they get done with the survey and jack it back up for UNDER 30fps gameplay? Ok...
Even here, at 1440p for instance, Metro only ran 34fps (and Last Light is more taxing than 2033). How low do you think the minimums are when you're only doing 34fps AVERAGE? UNPLAYABLE. I can pull Anandtech quotes that say you'd really like 60fps to NEVER dip below a 30fps minimum. In that they are actually correct, and other sites agree... http://www.guru3d.com/articles_pages/palit_geforce... "Frames per second / Gameplay:
<30 FPS - very limited gameplay
30-40 FPS - average yet very playable
40-60 FPS - good gameplay
>60 FPS - best possible gameplay
So if a graphics card barely manages less than 30 FPS, then the game is not very playable, we want to avoid that at all cost. With 30 FPS up-to roughly 40 FPS you'll be very able to play the game with perhaps a tiny stutter at certain graphically intensive parts. Overall a very enjoyable experience. Match the best possible resolution to this result and you'll have the best possible rendering quality versus resolution, hey you want both of them to be as high as possible. When a graphics card is doing 60 FPS on average or higher then you can rest assured that the game will likely play extremely smoothly at every point in the game, turn on every possible in-game IQ setting."
So as the single 7970 (assuming the GHz edition here in this 1440p article) can barely hit 34fps, by Guru3d's definition it's going to STUTTER. Right? You can check max/avg/min everywhere and you'll see there is a HUGE difference between min and avg. Thus the 60fps point is assumed good to ensure you stay above a 30fps min with no stutter (I'd argue higher depending on the game, multiplayer etc., as you can tank when tons of crap is going on). Guru3d puts that in EVERY GPU article.
The single 580 in this article can't even hit 24fps and that is AN AVERAGE. So unplayable totally, thus making the whole point moot right? You're going to drop to 1080p just to hit 30fps and you say this and a 7970 is overkill for 1080p? Even this FLAWED article here proves you WRONG.
Sleeping dogs right here in this review on a SINGLE 7970 UNDER 30fps AVERAGE. What planet are you playing on? If you are hitting 28.2fps avg your gameplay SUCKS!
http://www.tomshardware.com/reviews/geforce-gtx-77... Bioshock infinite 31fps on GTX 580...Umm, mins are going to stutter at 1440p right? Even the 680 only gets 37fps...You'll need to turn both down for anything fluid maxed out. Same res for Crysis 3 shows even the Titan only hitting 32fps and with DETAILS DOWN. So mins will stutter right? MSAA is low, you have two more levels above this which would put it into single digits for mins a lot. Even this low on msaa the 580 never gets above 22fps avg...LOL. You want to rethink your comments yet? The 580's avg was 18 FPS! 1440p is NOT for a SINGLE 580...LOL. Only 25fps for 7970...LOL. NOT PLAYABLE on your 7970ghz either. Clearly this game is 1080p huh? Look how much time in the graph 7970ghz spends BELOW 20fps at 1440p. Serious gamers play at 1080p unless they have two cards. FAR CRY 3, same story. 7970ghz is 29fps...ROFL. The 580 scores 21fps...You go right ahead and try to play these games at 1440p. Welcome to the stutterfest my friend. "GeForce GTX 770 and Radeon HD 7970 GHz Edition nearly track together, dipping into the mid-20 FPS range." Yeah, Far Cry will be good at 20fps.
Hitman Absolution has to disable MSAA totally...LOL. Even then 580 only hits 40fps avg.
Note the tomb raider comment at 1440p: "The GeForce GTX 770 bests Nvidia’s GeForce GTX 680, but neither card is really fluid enough to call the Ultimate Quality preset smooth." So 36fps and 39fps avg for those two is NOT SMOOTH. 770 dropped to 20fps for a while.
A Titan isn't even serious overkill for 1080p. It's just good enough, and for Hardocp a game or two had to be turned down even on it at 1080p! The data doesn't lie. Single cards are for 1080p. How many games do I have to show you dipping into the 20s before you get it? Batman AC barely hits 30s avg on a 7970GHz with 8xMSAA, and you have to turn PhysX off (not just NV PhysX, PhysX period). Check Tom's charts for GPUs.
In Hardocp's review of the 770 GTX, 1080p was barely playable with a 680 GTX and everything on. Upping to 2560x1600 caused nearly every card to need tessellation turned down and PhysX off in Metro Last Light. 31fps min on the 770 with SSAA OFF and PhysX OFF! http://hardocp.com/article/2013/05/30/msi_geforce_... You must like turning stuff off. I don't think you're a serious gamer until you turn everything on and expect it to run there. NO SACRIFICING quality! Are we done yet? If this article really tells you to pair expensive GPUs ($400-1000) with a cheapo $115 AMD CPU then they are clearly misleading you. It looks like that is exactly what they got you to believe. Never mind your double-GPU comment paired with the same crap CPU, adding to the ridiculous claims here already.
No argument there. My point wasn't that you can't find a game to run at 1440p ok. I could cite many, though I think most wouldn't be maxed out doing it on mid cards and surely aren't what most consider graphically intensive. But there are FAR too many that don't run there without turning lots of stuff off as many sites I linked to show. Also 98.75% of us don't even have monitors that go above 1920x1200 (I can't see many running NON-Native but it's possible), so not quite sure fun fact2 matters much so my statement is still correct for nearly 99% of the world right? :) There are probably a few people in here who care what the top speed of a Veyron SS is (maybe they can afford one, 258mph I think), but for the vast majority of us, we couldn't care less about it since we'll never buy a car over 100K. I probably could have said 50K and still be right for most.
Your statement kind of implies coders are lazy :) Not going to argue that point either...LOL. Not all coders are lazy mind you...But with so much power on pc's it's probably hard not to be lazy occasionally, not to mention they have the ability to patch them to death afterwards. I can handle 1-2 patches but if you need 5 just to get it to run properly after launch on most hardware (unless adding features/play balancing etc like a skyrim type game etc) maybe you should have kept it in house for another month or two of QA :) Just a thought...
You completely missed the point. The article is testing for .87% of the market. That is less than one percent. This article will be nice to reprint in 2-3yrs...then it may actually be relevant. THAT is the point. I think it's a pretty HIGH point, not low, and the fact that you choose to ignore the data in my post doesn't make it any less valid or real. Nice try though :) Come back when you have some data actually making a relevant point please.
The comparison doesn't make sense. This is about making false claims and misrepresenting data. What does that have to do with linux? Come back when you have a decent argument about the data in question here.
Who cares what most of the market has? 1440p monitors are in the $300 range from the Korean eBay sellers. Just because a bunch of know-nothings didn't get the memo and grab one of those better monitors, and instead spent all their cash upgrading their CPUs/GPUs while keeping their crap 1080p monitors, does not mean reviews should not focus on where people SHOULD go.
1080p is a garbage resolution for large displays when you have easy and CHEAP access to 1440p. I got one of those monitors; it's beautiful. The problem is not the 4% that are higher than 1080/1200p, it's the rest of you who are too CPU-focused to get a better monitor.
I mean, jesus, people, you sit and stare at that thing ALL DAMN DAY, and people actually spend HUNDREDS of dollars on multi-GPU setups and high-end CPUs to game at 1080p... it's submental. YOU and others need to stop complaining about a lack of focus on 1080p and get on board the 1440p train. You don't have that? Well, get it, stop lagging. You are choosing an inferior setup and complaining to Anandtech because they chose not to focus on your crap-resolution monitor?
It's almost as if you specifically cripple your gaming resolution just so you can feel more satisfied at how much faster the intel cpus beat out the amds. Well, you're right, they do, and you still chose an inferior gaming resolution, stop living in the ghetto of the pc gaming world and move higher.
"This is another attempt at covering for AMD and trying to help them sell products... Anandtech is MISLEADING you at best by showing a resolution higher than 98.75% of us are using and tapping out the single gpu..."
You could also easily argue that the article is helping to sell Intel's 4770K, providing data that misleadingly (though not falsely) indicates the superiority of the 4770K over the 2500K/3770K group.
For the majority gamers, it is indeed misleading to focus on 1440p only. For a good number, it is also misleading to focus only on stock clocks.
As you point out, at 1080p overclocking does help (though the benefit has to be weighed against the cost of upgraded cooling). And as others in forums have pointed out, 2700K vs. 3770K is roughly a wash: with any given aftermarket cooler, a 3770K at 'Maximum Stable Overclock' will have roughly the same performance as a 2700K at 'Maximum Stable Overclock', will run hotter than the 2700K, but will consume less energy, and so on...
On the other hand, preliminary indications are that for the majority of overclockers (those who do not want to spend as much for a custom water-cooling loop as for the CPU itself), a 4770K is a worse bet, as it apparently runs hotter than even the 3770K, and the gains in 'Instructions per Clock' likely do not make up for what would thus be a reduced 'Maximum Stable Overclock.' See here: http://forums.pureoverclock.com/cpu-overclocking/2...
In short: CPU overclocking yields a tangible benefit for 1080p gamers, and for the majority of CPU Overclockers (those who do not want to spend as much for a custom water-cooling loop as for the CPU itself), the 4770K appears to be something LESS than a 3770K or 2700K.
I didn't say anything about overclocking. Maybe one of the quotes did? My statements are pure chip to chip, no overclocking discussed. Maybe you were replying to someone else?
The article isn't helping to sell 4770Ks when he says single-GPU owners (98% according to the Steam survey) can play fine on an A8-5600. Single-GPU owners, again according to the survey, are NOT running above 1920x1200. So AMD gets killed unless you pull a stunt like Anandtech did here, as the benchmarks in the links I pointed to show.
I did not point out overclocking at 1080p helps. I made no statement regarding overclocking, but Intel wins that anyway.
I'm sorry if I missed this info while reading, but does Haswell come with dual-link DVI support? You know, so that I can drive my 1440p displays for everyday usage, since I don't game all that much.
The problem isn't Haswell, the problem is the mainboard. You would need a mainboard that supports dual-link and at least with the older generations that feature wasn't implemented. Unless the usual suspects changed that with their new offerings, you will have to use a displayport to dvi adapter to get that resolution without a dedicated card (hdmi on mainboards is usually restricted to 1080p as well, unless... see above).
I know Anandtech hasn't got to review the Richland desktop variants yet, but if the current recommendation is a Trinity APU, surely a >10% performance increase and a lower TDP would clinch it for Richland? The newly launched top-end A8-6600K is £20 more than the A8-5600K... but that's launch price.
Please, for the love of god, add a game like Crysis 3 or Far Cry 3. Your current games are all very old, and you will see a bigger difference in newer games.
Agree with request for Crysis 3. It has enough options to deliver a great visual experience and GPU beating, and it also scales well to multi-monitor resolutions for testing at extremes.
gamegpu.ru have done a lot of testing on all games with a variety of CPUs. Anandtech's choice of games is actually edge cases. Once you start looking at a wider list of games (just do a few CPUs but lots of games) you'll see a much bigger trend of performance difference, especially in a lot of the non-AAA titles. Around 50% of games show a preference for the 3930K at this point over a 2600K, so more multithreading is starting to appear, but you need to test a lot more games or you won't catch that trend and will instead come to a misleading conclusion.
I am not sure that the CPU is used any more heavily in recent games. This is a CPU test, and testing older games that are known to be CPU dependent is a must.
Moving forward, with the next gen consoles that is, testing the absolute newest multiplatform games will be a bit more relevant. However, even Farcry 3 and Crysis 3 are mostly GPU bound, so there will be little to no difference in performance by changing the CPUs out.
I think Supreme Commander or Supreme Commander 2 would make an excellent CPU demo. Those games have been, and remain CPU limited in a way no other games are, and for good reasons (complexity, AI, unit count), rather than poor coding. A good way to do this is to record a complex 8 player game against AI and then play it back at max speed, timing the playback. That benchmark responds pretty much 1:1 with clock speed increases and also has a direct improvement effect on gameplay when dealing with large, complex battles with thousands of units on map. The upcoming Planetary Annihilation should also be a contender for this, but isn't currently in a useful state for benchmarking.
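For what it's worth, a rough sketch of the kind of timing wrapper this would need, in Python (the executable path, replay file name, and command-line switches below are placeholders rather than documented SupCom options, and it assumes the game exits once playback finishes):

```python
import subprocess
import time

# Hypothetical values only: path, replay file, and switches would need to be
# checked against what the game actually accepts.
GAME_EXE = r"C:\Games\SupCom\bin\SupremeCommander.exe"
REPLAY_ARGS = ["/replay", "benchmark_8p_ai.scfareplay", "/exitongameover"]

def time_replay(runs=3):
    times = []
    for i in range(runs):
        start = time.perf_counter()
        # Blocks until the game process exits after playing back the replay.
        subprocess.run([GAME_EXE] + REPLAY_ARGS, check=True)
        elapsed = time.perf_counter() - start
        times.append(elapsed)
        print("Run %d: %.1f s" % (i + 1, elapsed))
    print("Best of %d: %.1f s" % (runs, min(times)))

if __name__ == "__main__":
    time_replay()
```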
I kind of hope Planetary Annihilation will have both server and client benchmarks available, since this seems like it would be a pretty amazing platform for benchmarking.
Interesting suggestion - is SupCom2 still being updated for performance in drivers? Does playback come out with the time automatically, or is it something I'll have to try and code with a batch file? Please email me with details if you would like; I've never touched SupCom2 before.
this sounds quite interesting, though I wonder if the AI is runtime bound rather than solution bound, as this could make the testing somewhat nondeterministic.
To clarify what I mean: a common method in AI programming is to let algorithms keep searching for better and better solutions, interrupting the algorithm when a time limit has passed and taking the best solution found so far. Such approaches can result in inconsistent gameplay when pitting multiple AIs against each other, which may change the game state too much between trials to serve as a good testing platform.
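A toy sketch of that "anytime" pattern (purely illustrative, not SupCom's actual AI): the longer the budget, or the faster the machine, the better the move it settles on, which is exactly where run-to-run and machine-to-machine drift could come from.

```python
import random
import time

def anytime_search(evaluate, generate_candidate, time_budget_s):
    """Keep looking for a better plan until the time budget expires, then
    return the best one found so far. How far it gets depends on CPU speed,
    which is what can make such AI nondeterministic across machines."""
    best, best_score = None, float("-inf")
    deadline = time.perf_counter() + time_budget_s
    while time.perf_counter() < deadline:
        candidate = generate_candidate()
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

# Toy usage: on a faster machine more candidates get evaluated in the same
# 50 ms budget, so the chosen "move" (and thus the game state) can differ.
best_move = anytime_search(
    evaluate=lambda m: -abs(m - 700),
    generate_candidate=lambda: random.randrange(1000),
    time_budget_s=0.05,
)
print(best_move)
```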
Even if the AI does use this approach it may not bias the results enough to matter, so I guess the only way to be sure is to run the tests a few times and see how consistent the results are on a single test system.
Forget about SupCom2 - That game has been scaled down quite a bit compared to SupCom1 and isn't as demanding to CPUs. There's also an active SupCom1 community that has and still is pushing out community made patches. :-)
SupCom actually has a built-in benchmark that plays a scripted map with some fancy camera work. Anyone can launch this by adding "/map perftest" to your shortcut. That said, it doesn't seem to be working properly anymore after several patches, nor does it actually give any useful data, as the sim score is capped at 10k for today's CPUs. And yet it's extremely easy to cripple any CPU you throw at it when simply playing the game. Just open up an 81x81km map with 7 AI enemies and watch your computer slow to a crawl as the map starts filling up.
And yes, the AI is "solution bound". Replays of recorded games with AI in them wouldn't work otherwise.
I wonder if somebody could create a custom SupCom1 benchmark... *Hint Hint*
What exactly does Steam count as online? Does just having the client sit in my tray count; or do I need to be playing a steam game at the time to be counted?
Thanks for the tests, there's a lot of data points in there so that's always appreciated.
I would've liked to have seen some higher perf Nvidia solutions in there though, at the very least some Kepler parts. It looks like a lot of the higher end Intel parts hit a GPU bottleneck at the top, which is not unexpected at 1440p with last-gen Fermi parts.
What it does show for sure is, you may give pause to going beyond 2-way CF/SLI if you have to go lower than x8 on that 3rd slot. Which means you will probably have to shell out for one of the pricier boards. Hard not to recommend X79 at this point for 3-way or higher, although the lack of official PCIe 3.0 support was a red flag for me.
I went with the Gigabyte Z87X-UD4 because I don't ever intend to go beyond 2-way SLI, and the 3rd slot being x4 (2.0) was better than the x8/x4/x4 (3.0) config on most boards, which gives me the option to run a PhysX card and retain x8/x8 (3.0) for my two main cards.
I haven't bothered overclocking my 2600K and I still feel it's plenty powerful. I think I may get a second GTX 670 though, Metro Last Light doesn't run all that great at 2560x1440.
So I guess the solution is to just ignore the launch to placate all those who have no interest in the launch, rather than post reviews and info about it for the ones that actually do? Doesn't make a lot of sense.
His complaint is on the mark. Haswell is about mobile, not desktop, not gaming.
Ivy Bridge was about cost reduction, Haswell is about reducing TDP. It is shocking that a mid-range 2+ year old Sandy Bridge desktop part is still so very competitive, even though it's been superseded by two whole generations.
Intel deserves all this criticism and more. They've clearly put the interests of desktop users and gamers far onto the back burner. They're now focused almost entirely on mobile and are treading water with everything else.
Eh, how can you blame them? The pure-play desktop market has been shrinking for a while now, with the high-performance desktop (basically gamers) even more of a niche. Maybe if they had some real competition from AMD in single-threaded perf... A lot of this is just Amdahl's law reaching its natural conclusion. The easy performance gains are mostly gone, so if you're Intel, do you dump endless money into another 25-30% per generation, or go after the areas that haven't been well optimized yet instead? Not a hard choice to make, especially considering the market's move towards mobile & cool computing in the last decade.
Intel doesn't deserve criticism. Haswell is a small improvement over Ivy Bridge because it has become extremely difficult to optimize an already excellent processor. Do you see anything better from AMD, ARM, Oracle or others?
Is there a need to upgrade from Ivy to Haswell? No. Was it necessary to upgrade from Nehalem to Sandy Bridge? No. The fact is that for most applications processors have been good enough for years, and money is better spent on SSDs, GPUs and whatnot.
The real conclusion of this article should be that processors absolutely do not matter for gaming and that the money is better spent on a speedier GPU. Processors may become relevant for the very, very few people that have 2x/3x extreme card setups. Even a setup with two mid-range cards such as the GTX 560 is not CPU dependent. I would welcome actual statistics on the number of players with 2x/3x high-end GPUs. I'm quite sure the count is ultra tiny, and for those people willing and able to spend thousands of dollars, do you think $100 less on a processor is relevant?
I don't have a problem with the conclusion he comes to, complaining about dissemination of information to come to that conclusion is what makes no sense. Put all the information out there, 1, 2, 3 articles a day np, then make your own informed decision on the platform. Bemoan the fact there is actual coverage a day or two after launch and one or two reviews? Makes no sense.
Many of us have a Q6600 @ 3600MHz, and personally I'm very happy with this and my 7870. I would still like to see a comparison of my CPU @ 3600MHz with the modern CPUs, because I don't think there is a huge difference in games.
It depends what you play, any game that is CPU limited is going to be HUGE difference with that CPU. I had the same chip at 3.6GHz, which was great btw, and even when I upgraded to 920 @4GHz there was huge improvement in some games, most notably GTA4 at the time. Some other games that scale extremely well with CPU are WoW, Diablo 3, etc. just to name a few.
See my comments here...Chizow is correct, and even understating it some. There are a LOT of CPU-limited games, as I showed in my links. Huge differences in CPU perf from the A10-5800 up to the 4770K, never mind the A8-5600 junk Ian recommends here for single GPU. It just isn't correct to recommend that CPU, or even the A10-5800K, which I showed getting smacked around in many games at 1080p. Articles like this make people think games are not CPU bound (it's far more games than Civ5): Neverwinter, Metro Last Light, Tomb Raider, Far Cry 3, Crysis 3, etc...Once 20nm comes we may find even 1440p showing just as many games limited by the CPU. If rumors are true, Volcanic doubles stream processors. I'm sure NV will match that. You end up GPU bound when you up the res to 1440p on single cards now, but that won't be forever, and 98.75% of us according to Steam don't play at 1440p (.87%) or above (1.25% total for all resolutions above 1920x1200).
Check the 1080p data on my links (techreport was a good one as they show 1080p in most of the listed games). Toms shows neverwinter as I noted needing a very high cpu also. Hit all comments on this article, and Ctrl-F my name. Ignore my post comments and just click the links in them to prove Chizow's point (and my own). CPU is important at 1080p and 1920x1200 NOW and will be important at higher res with the next gen cards at 20nm. You will never get out of your AMD mistake if you take this article's suggestions. Well at least not without changing to an Intel board/chip...LOL. Who wants to do that? Just buy an Intel unless you're broke. Don't trust me though, read the links provided and judge for yourself how accurate anandtech is here.
I showed some games that are nearly DOUBLE on Intel vs. the A10-5800K! You don't have to like the way I made my point or believe me, just check the links :) They all say the same thing. CPU is an issue, just as Chizow shows in his link. You can find this in many CPU articles where they use a top GPU (usually a 7970/680) and test new CPUs with the oldies in there too, which show large separations. Check i7-3770K or FX-8350 articles (just google those two CPU models and "review" for ample sites showing the spread)...1080p separates the men from the boys in CPUs.
After you check the links (and chizow's), come back and agree Anandtech needs to change their ways, or tear my comments apart if I'm lying :) Future gpu's will only make our point stick out even more. CPU matters. Also note a lot of the games that are gpu limited on single cards are NOT playable anyway (check sleeping dogs right here in this article 1440p...7970 at 28fps avg is NOT playable, mins will dip to 20's or below). So you're forced back into cpu limited in a lot of cases at 1080p. Where 98.75% of us play you see cpu limits a lot.
Go back one page on Chizow's link to Skyrim's benchmark in the same article for the same data. 1080p 3770 scores 88.2 to 8350's 67.4 (that's a lot and a huge hint to how your future on AMD will look) http://www.tomshardware.com/reviews/fx-8350-visher... That's a 30% difference and an 8350FX is far faster than an A8-5600 Ian recommends here. Chizow is even more right if you toss in Ian's recommendation of an even slower cpu than 8350 vs. Intel's stuff. Even in skyrim at 1680x1050 they separate from 90fps to 68fps for 8350fx. So until you completely tap out your gpu (1440p and up which basically requires 2+ cards) you will notice if your cpu is junk or not. Since this article is only written for apparently 1.25% of the readership (or world for that matter according to steam survey), you will notice the cpu! Unless you're raising your hand as the 1.25% :) I don't call 30-100% faster marginal improvements do you? Add CIV 5 also which this site even proves in this article ;) At least they got something right.
http://www.tomshardware.com/reviews/a10-6700-a10-6... Check the Tom's A10-6800K review. With only a Radeon 6670, the i3-3220 STOMPS the A10-6800K using the same card at 1080p in F1 2012. 68fps to 40fps is a lot, right? Both chips are roughly $145. Skyrim shows the 6800K well, but you need 2133MHz memory to do it. And faster Intel CPUs will leave this in the dust with a better GPU anyway.
http://www.guru3d.com/articles_pages/amd_a10_6800k... You can say 100fps is a lot in Far Cry 2 (it is), but you can see how a faster CPU is NOT limiting the GTX 580 here, as all resolutions run faster. The i7-4770 allows the GTX 580 to really stretch its legs to 183fps, and it drops to 132fps at 1920x1200. The FX-8350, however, is pegged at 104 for all 4 resolutions. Even a GTX 580 is held back, never mind what you'd be doing to a 7970GHz etc. All AMD CPUs here are limiting the 580 GTX while the Intels run up the fps. Sure, there are GPU-limited games, but I'd rather be using the chip that runs away from slower models when this isn't the case. From what all the data shows amongst various sites, you'll be caught with your pants down a lot more than Anandtech is suggesting here. Hopefully that's enough games for everyone to see it's far more than Civ5, even with different cards affecting things. If both GPU sides double their GPU cores, we could have a real CPU shootout in many things at 1440p (and of course below this they will all spread widely, even more than I've shown with many links/games).
Hey Ian, how come no Nehalem or Lynnfield data points? There are a lot of us on these platforms who are looking at this data to weigh vs. the cost of a Haswell upgrade. With the ol' 775 geezers represented it was disappointing not to see 1366 or 1156. Superb work overall however!
Sure. Though still, for single GPU it would be wiser to be "realistic" and do 1080p, which is more common (the single-monitor, average-Joe-gamer type of scenario), and go 1440p (or higher) for multi-GPU and enthusiast setups.
The purpose of the article is choosing a CPU, and that needs to show some sort of scaling in near-real-life scenarios; but if the GPU is the bottleneck from the start, it will not be possible to evaluate the CPU part of the performance equation in games.
Or maybe it would be good to show some sort of combined score from all the tests, so that Civ V and the other games show some differentiation in the recommendation as well.
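One way such a combined score could be computed is a simple geometric mean across games; this is just a sketch, and the per-game FPS figures below are made-up placeholders rather than the article's data.

```python
from math import prod

def combined_score(fps_by_game):
    """Geometric mean of per-game FPS: one big outlier (say, a huge Civ V
    lead) influences the result without completely dominating it."""
    values = list(fps_by_game.values())
    return prod(values) ** (1 / len(values))

# Placeholder numbers for illustration only, not taken from the article's charts.
cpu_a = {"Metro 2033": 34, "Dirt 3": 82, "Civ V": 41, "Sleeping Dogs": 28}
cpu_b = {"Metro 2033": 35, "Dirt 3": 85, "Civ V": 63, "Sleeping Dogs": 29}
print("CPU A: %.1f" % combined_score(cpu_a))
print("CPU B: %.1f" % combined_score(cpu_b))
```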
(1) While I understand the issue of MCT is a tricky one, I think you'd be better off just shutting it off, or, if you test with it, noting the actual core speeds that your CPUs are operating at, which should be 200MHz above nominal Turbo.
(2) I don't understand the reference to an i3-3225+, as MCT should not have any effect on a dual-core chip, since it has no Turbo mode.
(3) I understand the benefit of using time demos for large-scale testing like what you're doing, but I do think you should use at least one modern game. I'd suggest replacing Metro2033, which has incredibly low fps results due to a lack of engine optimization, with Tomb Raider, which has a very simple, quick, and consistent built-in benchmark.
Thanks for all your hard work to add to the body of knowledge on CPUs and gaming.
(1) Unfortunately for a lot of users, even DIY not just system integrators, they leave the motherboard untouched (even at default memory, not XMP). So choosing that motherboard with MCT might make a difference in performance. Motherboards without MCT are also different between themselves, depending on how quickly they respond to CPU loading and ramp up the speed, and then if they push it back down to idle immediately in a low period or keep the high turbo for a few seconds in case the CPU loading kicks back in.
2) This is a typo - I was adding too many + CPU results at the same time and got carried away.
3) While people have requested more 'modern' games, there are a couple of issues. If I release something that has just come out, the older drivers I have to use for consistency will either perform poorly or not scale (case in point, Sleeping Dogs on Catalyst 12.3). If I am then locked into those drivers for a year, users will complain that this review uses old drivers that don't have the latest performance increases (such as 8% a month for new titles not optimized) and that my FPS numbers are unbalanced. That being said, I am looking at what to do for 2014 and games - it has been suggested that I put in Bioshock Infinite and Tomb Raider, perhaps cut one or two. If there are any suggestions, please email me with thoughts. I still have to keep the benchmarks regular and have to run without attention (timedemos with AI are great), otherwise other reviews will end up being neglected. Doing this sort of testing could easily be a full time job, which in my case should be on motherboards and this was something extra I thought would be a good exercise.
It is sad to see poor journalism in the form of excuses in an otherwise excellent article. :-/
1. Any review sites that make excuses for why they ignore FCAT just highlight that they don't _really_ understand the importance of _accurate_ frame stats.
2. Us hardcore gamers can _easily_ tell the difference between 60 Hz and 30 Hz. I bought a Titan to play games at 1080p @ 100+ Hz on the Asus VG248QE using nVidia's LightBoost to eliminate ghosting. You do your readers a disservice by again not understanding the issue.
3. Focusing on 1440p is largely useless, as it means people can't directly compare how their Real-World (tm) systems stack up against the benchmarks.
4. If your benchmarks are not _exactly_ reproducible across multiple systems you are doing it wrong. Name & shame games that don't allow gamers to run benchmarks. Use "standard" cut-scenes for _consistency_.
It is sad to see the quality of a "tech" article gloss over and trivialize important details.
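For readers unfamiliar with the frame-stat metrics being argued about here (99th-percentile frame times, frame-time variance), a minimal sketch of how they are typically derived from a per-frame capture; the capture below is invented purely to illustrate why an FPS average can hide stutter.

```python
import statistics

def frame_time_stats(frame_times_ms):
    """Summarize a per-frame capture (FRAPS/FCAT-style log of frame times in
    milliseconds): average FPS, 99th-percentile frame time, and variance."""
    ordered = sorted(frame_times_ms)
    avg_fps = 1000.0 / statistics.mean(ordered)
    p99_ms = ordered[max(0, int(len(ordered) * 0.99) - 1)]
    return avg_fps, p99_ms, statistics.pvariance(ordered)

# Made-up capture: mostly ~16.7 ms frames with a handful of 45 ms spikes.
# The average FPS still looks healthy; the 99th-percentile frame time is
# what exposes the stutter a plain FPS average hides.
capture = [16.7] * 95 + [45.0] * 5
fps, p99, variance = frame_time_stats(capture)
print("avg %.0f FPS, 99th percentile %.1f ms, variance %.1f" % (fps, p99, variance))
```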
Judging by your excellent command of English, I don't think you could identify a decent technical article if it slapped you upside the head and banged your sister.
There is a reason Tom's Hardware, Hard OCP, guru3d, etc. use FCAT.
I feel sad that you and AnandTech tech writers are too stupid to understand the importance of high frame rates (100 Hz vs 60 Hz vs 30 Hz), frame time variance, 99th percentile, proper CPU-GPU load balancing, and micro stuttering. One of these days when you learn how to spell 'ad hominem' you might actually have something _constructive_ to add to the discussion. Shooting the messenger instead of focusing on the message shows you are still an immature little shit that doesn't know anything about GPUs.
Ignoring the issue (no matter how badly communicated) doesn't make it go away.
What are _you_ doing to help raise awareness about sloppy journalism?
The data collected in this article is likely a week or two old. Richland was not available at that time. It takes an extremely long time to do this kind of testing.
Richland was launched today. Haswell was launched two days ago. Neither CPU was available two weeks ago. It all depends on review units being released to review websites. Either Richland was left out because it wasn't different enough from Trinity to matter or AMD did not hand out review units.
Great work Ian! Definitely waiting to see the i5-3570K added into the mix, to see how it compares to the i5-2500K (the 3570K being more future-proof thanks to PCIe 3.0).
As always, thanks for the great article and hard work Ian.
I'd really like to see how a few of the tests scale with overclocked CPU's, notably those in which the sandy bridge processors were competitive with ivy bridge and haswell parts. Obviously overclocking introduces a lot of variables into your testing, but it would be very interesting to see a few of the popular choices tested (sandy bridge parts @ 4.5 are quite common, and many users on such a system were waiting for haswell before they upgrade).
Interesting results, but very limited as well. Why test at a resolution used by only 4% of the players?
I would have rather seen the results at 1080p, over a wider variety of games. Especially RTS games and newer games like crysis 3, FC3, and Tomb Raider. I tested Heart of the Swarm on my computer with a HD7770 and i5 2320 and was able to max out the cpu in a 10 player skirmish match at ultra, 1080p. So I am sure an A8-5600 would be limiting in that case.
Even considering the results only of the games tested, the A8-5600k seems a strange choice. The i3 seems just as valid, considering it is equal or faster in every game but one, while using less power.
Thank you, for your time, effort, and energy in compiling an encyclopedic database on the effects of cpu on single and multi gpu configurations, in alternate gaming/engine scenarios. Your work is insightful, informative, and wholly devoted to the science of benchmarking. This approach has helped me, as a relatively new computer enthusiast, to more deeply understand testing methodology in the computing field.
I am interested in the pure CPU benchmarks of Starcraft 2 with the 4770k and 4670k. I understand this game is not optimized, is directx9, and is extremely cpu limited with only 2 maximum cores active, and thus not in top priority for providing benchmarks. Will haswell be added to the benchmarking database for sc2?
Ian, I have to say (again) that i7-3820 should be in this review. You say that i7-4770K is a better value proposition than Sandy Bridge-E (X79), I assume because you are only thinking of the expensive 6 core X79 CPU's. That changes if you do consider i7-3820.
X79 brings far better support for multi-gpu setups with enough PCIe lanes to feed multiple cards quite happily. No PLX needed. Pair that with an i7 3820 (cheaper than i7-3770K/i7-4770K) and you may find the performance surprisingly good for the price.
I considered the 3820 numerous times (it's cheap at MC, same price as a high-end 3770K/4770K) but I shy away because it inexplicably performs *WORSE* than the 2700K/3770K/4770K. I don't know why; it has more L3 cache and is clocked higher before/after boost. Just an oddball chip.
Besides, X79 as a platform was dated almost as soon as it released. PCIe 3.0 support is spotty with Nvidia (the reg hack isn't guaranteed), there's no native USB 3.0, and no full SATA 6G support. I went for Z87 + 4770K instead, because X79 + 3820 didn't offer any noticeable advantages while carrying a significantly higher price tag (board price).
So if you take out the 1920x1200 from the steam survey (4.16 - 2.91% right?), you've written an article for ~1.25% of the world. Thanks...I always like to read about the 1% which means absolutely nothing to me and well, 98.75% of the world.
WHO CARES? As hardocp showed even a Titan still can't turn on EVERY detail at even 1920x1080. I would think your main audience is the 99% with under $1000 for a video card (or worse for multigpu) and another $600-900 for a decent 1440p monitor you don't have to EBAY from some dude in Korea.
Whatever...The midpoint to you is a decimal point of users (your res is .87%, meaning NOT ONE PERCENT & far less have above that so how is that midpoint? I thought you passed MATH)?...Quit wasting time on this crap and give us FCAT data like pcper etc (who seems to be able to get fcat results into EVERY video card release article they write).
"What we see is 30.73% of gamers running at 1080p, but 4.16% of gamers are above 1080p. If that applies to all of the 4.6 million gamers currently on steam, we are talking about ~200,000 individuals with setups bigger than 1080p playing games on Steam right now, who may or may not have to run at a lower resolution to get frame rates."
That really should read ~55,000 if you take away the 2.91% that run 1920x1200. And your gaming rig is 1080p because unless you have a titan (which still has problems turning it all on MAX according to hardocp etc to remain playable) you need TWO vid cards to pull off higher than 1920x1200 without turning off details constantly. If you wanted to game on your "Korean ebay special" you would (as if I'd ever give my CC# to some DUDE in a foreign country as Ryan suggested in the 660TI comment section to me, ugh). It's simply a plug change to game then a plug change back right? Too difficult for a Doctor I guess? ;)
This article needs to be written in 3 years, maybe with 14nm GPUs, when we might be able to run a single GPU that can turn it all on max and play above 30fps while doing it, and that will still be top rung. I really doubt Maxwell will do this; I'm sure they will still be turning stuff off or down to stay above a 30fps min, just as Titan has to at 1080p now. Raise your hand if you think a $500 Maxwell card will be 2x faster than Titan.
1440p yields an overall pixel count of 3,686,400, substantially higher than the 2,073,600 pixels found on a 1080p monitor/TV. So since Titan falls SHORT of playing ALL games maxed at 1080p, we would need ~2x the power at, say, $500 for 1440p to be called anywhere NEAR mainstream, right? I don't see NV's $500 range doing 2x Titan with Maxwell, and that is 6-9 months away (6 for AMD Volcanic, ~7-9 for NV?). Raise your hand if you call $500 mainstream...I see no hands. They may do this at 14nm for $300, but that is a long way off, and most call $200 mainstream, right? Hence I say write this in another 3yrs, when the 1080p share of users in the Steam survey (~31%) is actually the 1440p share. Quit writing for .87% please, and quit covering for AMD with FCAT excuses. We get new ones from this site with every GPU article: the drivers changed, some snafu invalidated all our data, not useful for this article, blah blah, while everyone else seems to be able to avoid all of Anandtech's issues with FCAT and produce FCAT result after FCAT result. Odd you are the ONLY site AMD talked to directly (which even Hilbert at Guru3d mentions...rofl). Ok, correction. IT'S NOT ODD. AMD personal attention to website = no FCAT results until prototype/driver issues are fixed....simple math.
http://www.alexa.com/siteinfo/anandtech.com# Judging by your 6-month traffic stats, I'd say you'd better start writing REAL articles without slants before your traffic slides to nothing. How much more of a drop in traffic can you guys afford before you switch off the AMD love? Click the traffic stats tab. You have to be seeing this, right, Anand? Your traffic has nearly halved since ~9 months ago and the 660 TI stuff. :) I hope this site fixes its direction before the Volcanic & Maxwell articles. I might have to start a blog just to pick the results of those two apart, along with a very detailed history of the previous articles and the comments sections on them. All in one spot for someone to take in at once, I'm sure many would be able to do the math themselves and draw some startling conclusions about the last year on this site and how it's changed. I can't wait for Ryan's take on the 20nm chips :)
Who actually buys a computer and does nothing but game on it every second they are on it? That's why the A8-5600K should not be the recommended CPU. It's just gonna drag you down in every other thing you do with the computer. The i5-2500K should be here too. You can get them for a STEAL used on eBay; I've seen them go for around 140-150. Sure, you can pay 100-110 on eBay for the A8-5600K, but is a 40-dollar savings worth that much performance loss?
I didn't even go into this aspect (it's not just about gaming as you say clearly). But thanks for making the other 1/2 of my argument for me :)
Your statement plus mine makes this whole article & its conclusions ridiculous. Most people buy a PC and keep it for over 3yrs, meaning you'll be punished for a LONG time every day in everything you do (gaming, ripping, rar, photos, etc.). AMD CPUs currently suck for anyone but very poor people. Even for the poor, I'd say save for another month or two, as $50-100 changes the world for years for your computing no matter what you'll use it for. Or axe your vid card for now and buy a higher-end Intel. Survive for a bit until you can afford a card to go into your machine. AMD just isn't worth it for now on desktops. I'm an AMD fan, but the computing experience on Intel today is just better all around if you ever intend on putting in a discrete card worth over, say, $100, and this only gets worse as GPUs improve and leave your CPU behind.
You will get more cpu limited every year. Also it's much easier to change gpu's vs cpu's (which usually requires a new board for substantial gains unless you really buy on the low-end). Having said that, buying low-end haswell today gets you a broadwell upgrade later which should yield some decent gains since it's 14nm. Intel is just hard to argue against currently and that is unfortunate for AMD since the bulk of their losses is CPU related and looks to just get worse (the gpu division actually made ~15mil or so, while cpu side lost 1.18B!). Richland changes nothing here, just keeps the same audience it already had for total losses. They need a WINNER to get out of losses. Consoles may slow the bleeding some, but won't fix the losses. Steamroller better be 30-40% faster (10-20% is not enough, it will again change nothing).
Quote: What we see is 30.73% of gamers running at 1080p, but 4.16% of gamers are above 1080p. If that applies to all of the 4.6 million gamers currently on steam, we are talking about ~200,000 individuals with setups bigger than 1080p playing games on Steam right now, who may or may not have to run at a lower resolution to get frame rates.
Wrong. 2.91% is 1200p (1080p at a 16:10 ratio), which is barely a higher resolution. Only 1.25% are truly at 1440p or above, a much smaller number. ~57,000 gamers compared to 1,380,000 gamers... I respect 1440p, and I'm getting a new system to play at that res, but it won't be mainstream any time soon.
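A quick back-of-the-envelope check of the numbers being argued over, using the survey percentages quoted in this thread (which shift a little from month to month), gives roughly the figures above.

```python
# Survey percentages as quoted in this comment thread; treat them as snapshots.
active_gamers = 4_600_000
above_1080p   = 0.0416   # everything above 1920x1080
at_1920x1200  = 0.0291   # the 16:10 flavour of "1080p-class"
at_1080p      = 0.3073

print("Above 1080p:      %10.0f" % (active_gamers * above_1080p))
print("Above 1920x1200:  %10.0f" % (active_gamers * (above_1080p - at_1920x1200)))
print("At 1080p:         %10.0f" % (active_gamers * at_1080p))
```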
I wish I could take this article seriously. You choose 4 games to recommend a CPU (Metro 2033, which is GPU bound; Dirt 3, a racing game focused on graphics; Civ V, which you knock off as unimportant based on FPS rather than turn times, which are all anyone really cares about in the late game; and Sleeping Dogs, which is open world but doesn't have complex scripting or AI) and then recommend AMD based on the 3/4 of the games which are GPU bound and thus don't favor the faster Intel CPUs much?
FPS will only get you so far. Smoothness will be better on the faster CPU's. Anyway, most importantly, if you want to have a serious article with a good recommendation, how about testing CPU bound modern games? Shogun 2, mass AI calculations for many units combined with complex turn times (which is very important in any turn based game). Skyrim, with actually complex AI and large amounts of scripting, which uses the CPU to its utmost. Crysis 3, a good test for a balance of CPU and GPU focus. BF3 Multiplayer, which from personal experience needs a good CPU to play well.
Use Nvidia and AMD GPUs; one could favor the other, leading to a better recommendation (this brand for this CPU). Civ V will see large performance gains on an Nvidia card combined with a good CPU, due to its use of deferred contexts (DX11 multithreading) and Nvidia's support of it (AMD seriously needs to step up and support it; most game engines aren't because AMD isn't. It's built into DX11, so support it, AMD!).
Lastly, recommend for the mainstream. 1080p is the mainstream, not 1440p+: 1440p+ is 1.25% of Steam players, while 1080p is more than 30%.
I wonder what the point is of conducting such a big effort to test CPU performance and then making all the systems GPU bottlenecked, just to cater to 4% of the gaming population. Moreover, some tests were done with an "old" GTX 580, which bottlenecks at those resolutions quite soon.
I renew my request to update the list of games used and to use the most "popular" video settings, in order to make a real comparison of what a gamer may find with the usual setup he has at home. Monitors bigger than 24" are not popular at all. Maybe an SLI/Tri-SLI setup and a 5800x-wide resolution may be added, but surely that should not be considered the way things work normally, nor taken as definitive benchmark results to draw some obviously confusing conclusions from. An A10-xxxx is way, way behind any i5 CPU, and often even behind some i3s, in real gaming. I can't really understand how one can believe in such a suggestion. I am starting to think that something other than objective results is being created and shown here.
AMD only visited ONE website in recent history. ANANDTECH.
Also note they pushed this 1440p idea when the numbers were EVEN WORSE in the 660TI article comments section (and even the articles conclusions, we're talking 9 months ago - 1440p is STILL not popular nor above it). See Ryan's exchange in that article with me. He was pushing the Korean Ebay dude then...ROFL. I pointed out then that amazon only had 2 people selling them and they had no reviews (ONE, which was likely the guy that owned the place selling it), no support page, no phone, and their website wasn't even their own domain and email was a gmail address if memory serves. Essentially giving your CC# to some dude in Korea and praying. Which another site mentioned he did pray when ordering a test unit...LOL Techreport's 1440p korean review back then if memory serves. Yet Ryan claimed everyone in the forums was doing this...Whatever... Don't even get me started on Jared's personal attack while ignoring my copious amounts of data proving Ryan's article BS even with using Ryan's own previous article's benchmarks! It's kind of hard to argue against your own data right?
I sincerely hope this site goes back to producing articles on cpu/gpu that are worthy of reading. These days all they do is hide AMD's inadequacies vs. Intel and NV. They are the only site saying things like "buy an A8-5600 for any SINGLE gpu machines"...I can't believe how far they've gone in the last 9 months. Their traffic stats show I'm not alone. The comments here show I'm not alone. AMD can't be paying them enough to throw their whole reputation down the drain. Look what the Sysmark/Bapco/Van Smith scandal did to Tomshardware (Tom even changed all his bylines to "tom's staff" or some crap like that). He had to sell at far less than the site was worth before the damage, and it took years to get back to a better reputation and wash off the stink. Heck I stopped reading in disgust for years and many IT friends did the same. I mean they were running Intel ads in AMD review articles...LOL. I think that is just wrong (the van smith stuff was just unconscionable). For those who remember Van, he still writes occasionally at brightsideofnews.com (I only recently discovered this, also writes on vanshardware but not much analysis stuff). Good to see that.
Having read technical articles, white papers and tech reviews for over 25 years, I can't remember ever reading a "finding perfection" examination. My hypothesis is: does there exist a CPU (across all CPUs tested) to GPU (across all OEMs tested) mix that is ideal? Obviously speed is king, so I am thinking more from an engineering perspective. Does this exist?
Steam and EA online are both great services. If there is a service that takes away physical media, it's a huge winner to me. I still have my piles of Sierra game boxes stored away.
Here is a YouTube link showing 3DMark11 and the Windows Experience Index rating for the 4770K 3.5GHz Haswell, not overclocked. This is apparently around 10-20fps slower than the 6800K in most games. And almost twice the price!! YouTube link: http://www.youtube.com/watch?v=k7Yo2A__1Xw
Quote:" The only way to go onto 3-way or 4-way SLI is via a PLX 8747 enabled motherboard, which greatly enhances the cost of a motherboard build. This should be kept in mind when dealing with the final results."
The only way? X79 supports up to four x8 channels of PCIe 2.0/3.0. The 4-core 3820 overclocks readily, and on an X79 board it is a very small cost increase over a high-end non-PLX 8747 1155-socket setup. Plus there is the upgrade benefit of stepping up to the 6-core 3930K if one wants to combine usage for professional multicore applications with gaming.
"What we see is 30.73% of gamers running at 1080p, but 4.16% of gamers are above 1080p."
So an article and benches are provided for the benefit of the 4.16% of gamers who might be running more pixels, versus the 65% (almost 3 million) lion's share of gamers that must be running at fewer pixels than found at 1080p. Very strange.
Just to point out the blindingly obvious but who would spend big $$$ on a 1440p monitor and a top end gpu and then buy a low end budget cpu (A8-5600)...
The realistic minimum recommendation is going to be an i5-3570K.
So, how would a 955BE perform compared to the CPUs on the test? From what I understand, I should just keep this CPU, as a new one is not going to make much of a difference?
Thank you for doing all this work. A great follow-up to the original!
Could you please correct some charts on the CPU Benchmarks page, though? The "Video Conversion - x264 HD Benchmark" section is displaying the charts for the "Grid Solvers - Explicit Finite Difference" section.
Frankly, not the best article. The resolution is too high for the GPU, and then a CPU is recommended based on it: a CPU which will not provide the performance needed for games. (Techreport showed that an APU is not a good idea when paired with a real GPU; FPS might be in range, but frame latency is terrible.)
Ian, I'm afraid I have to agree with some of the naysayers here. You've tried so hard to have clean *scientific* analysis that you've failed to see the wood for the trees. In actual fact I fear you've reached the opposite of a scientific conclusion *because* you only focussed on easily obtainable/reproducible results.
Just because results for modern games are hard to obtain, doesn't mean you can ignore them despite it being a hard path to walk. I have 1440p but agree that it's not relevant to the vast majority and anyone affording a 1440p monitor won't care to save $40 on AMD A8 vs core i5. So you have to be *realistic* (as well as scientific).
I know from a few years of international finance analysis that when doing an independent study, there is a chance you can come to a conclusion that flies in the face of the market or common opinion. You have to be *SO* careful when this happens and quadruple-check what you have ended up with, because 99% of the time the market or 'hive mind' is correct and there is an error or misunderstanding in your own work. After all, the conglomerate conclusion of hundreds of often intelligent people is hardly likely to be wrong, even if you are a smart guy. The chance that you have found the truth and that everyone else is wrong really is about 1% (yes, it does happen, but it is a once in a blue moon type of event).
It might seem a huge hit to admit that much of your hard work was misdirected but it could save more pain in the long run to go back to the drawing board and consider what you are trying to achieve and how best to go about it. A very small sample of older titles at unpopular resolutions really could skew results to be misleading.
I agree. However, we still have to understand what thesis Ian wanted to demonstrate. If it was "AMD CPUs don't have to appear so bad vs Intel", the strategy used for the demonstration is quite good. On the other hand, if it was "Let's see which is the best CPU for playing games", the strategy is a complete fail. And it is still partially the same if it were "Let's see which is the cheapest CPU to cope with a bottlenecked GPU", as those old games, except Civ5, do not have any complex AI or scripts, which are the CPU-intensive tasks. If I were to judge this work as homework I would grade it an F, because it is aimed at a small part of the market, uses old benchmarks not valid today, is incomplete (lack of FCAT), and has a wrong setup (bottlenecking GPUs to evaluate CPU performance?). Wrong on all aspects. If instead the intent was to show that AMD CPUs are just trailing Intel's most expensive ones rather than being a complete generation behind, the evaluation could be a B, but it remains quite limited if we look at the represented market (is 3% of a market that is capable of spending well more than the average gamer a good target for demonstrating that they can spare a few bucks by using an otherwise castrated CPU?).
For all these reasons I would say that this is one of the worst articles I have ever read on this site. It shows some incompetence or, worse, a bias.
This article is irrelevant to 95+% of people. What was the point in this? I don't give a rats ass what will be in 3-5 years, I want to know performance numbers for using a setup with realistic numbers of TODAY.
While I appreciate the time and effort you put into this, I have to agree with those who call out 1440p's irrelevance for your readers. I think if we tested at sane resolutions, we'd find that a low-end CPU, like a G2120, coupled with a mid-to-high range GPU, would yield VERY playable framerates at 1080p. I'd love to see some of the older Core 2 Duos up against the likes of a G2120, i3-3220/5, on up to i5-3570 and higher with a high-end GPU at 1080p. That would be very useful info for your readers and could save many of them lots of money. In fact, wouldn't you rather put your hard-earned money into a better GPU if you knew that you could save $200 on the CPU? I'm hinting that I believe (without seeing actual numbers) that a G2120 plus a high-end GPU would perform virtually identically in gaming to a $300+ CPU with the same graphics accelerator, at 1080p. Sure, you'd see greater variation between the CPUs at 1080p, but when we're testing CPUs, don't we WANT that?
Some people dont really know what they are reading...apparently!!
The fact that in every single review someone says anandtech is being paid by someone is actually a good thing. I mean, a month ago a bunch of people said they are trying to sell Intel cpus, and now we have people saying the same shit about AMD.
Furthermore, the whole benchmark is based around 1440p! Calling it bullshit because only a small niche has such a monitor is stupid. Nobody has a Titan either; should they not benchmark it? No one runs quad SLI either, and so on.
Even the guy that flamed Ian admitted that the benchmark is bottlenecked on the GPU, so it makes AMD look better. WELL THATS THE FUCKING POINT. AMD LOOKS better cause it fucking is, taking into consideration that, as long as you have a single card, YOU DONT FUCKING NEED ANY BETTER CPU. That's what the review pointed out.
All the benchmark and Ian's recommendation said was that, for 1440p and one video card, since the GPU is already the bottleneck, get the cheapest CPU you can, which in this case is AMD's A8. I mean, why in fucking hell would I want an i7 at 10GHz if it is left idle scratching balls cause of the GPU?
jabber - Tuesday, June 4, 2013 - link
4.6 million on Steam? Is that basically the current total subscriber level? Wow I knew gamers were a minority but that's scary. Okay I know not all gamers are on Steam but...
Amazed that companies even bother for so few. Say it isn't so!
jeffkibuule - Tuesday, June 4, 2013 - link
Pretty sure that's current ACTIVE users.
jabber - Tuesday, June 4, 2013 - link
I'd hope so.
chizow - Tuesday, June 4, 2013 - link
Yeah it has to be, a few years ago Steam had announced they have some 25 million users, it was actually very close to the individual numbers for 360 and PS3 at the time. Valve keeps their total #s and sales really close to the vest though, so it's hard to get numbers out of them unless they are announcing milestones.
Rattlepiece - Tuesday, June 4, 2013 - link
4.6 million was the current amount of users online when the article was written. http://store.steampowered.com/stats/
Steam has more than 55 million active users.
medi02 - Wednesday, June 5, 2013 - link
I wonder what they mean by "active". Most likely it's a number of users with steam client running.
Well, it runs idle for more than a year for me, yet I'm an "active" user I guess...
UltraTech79 - Saturday, June 22, 2013 - link
Why the hell are you running steam idle for over a year and not using it then?
FlushedBubblyJock - Monday, April 6, 2015 - link
Worse than that, he shows the rez stats, and we have 30% at 1080p, and less than 5% at higher rez, and yet he totally ignores and blows off the 65% that are below 1080p. He pretends they don't even exist. Must be tough looking so far down the nose at what you'd prefer not to see or notice.
Amazing "accuracy" as usual, as he immediately rambles off into his personal fantasy about multiple screens "gaining"...
R O F L
Please apply for politics or news.
trajan2448 - Tuesday, June 4, 2013 - link
Still publishing Crossfire numbers as legit, despite multiple sites showing numerous runt frames which never reach the screen? This is disingenuous, to say the least.
dsumanik - Tuesday, June 4, 2013 - link
What's more disingenuous is the Haswell review: a glowing review of an incremental, more-of-the-same release from Intel. This article actually recommends a 2500K.
That says it all!
ninjaquick - Tuesday, June 4, 2013 - link
"If you were buying new, the obvious answer would be looking at an i5-3570K on Ivy Bridge rather than the 2500K"Ian basically wanted to get a relatively broad test suite, at as many performance points as possible. Haswell, however, is really quite a bit quicker. More than anything, this article is an introduction to how they are going to be testing moving forward, as well as a list of recommendations for different budgets.
dsumanik - Tuesday, June 4, 2013 - link
2-year-old mid-range tech is competitive with, and cheaper than, Haswell. Hence AnandTech's recommendation.
The best thing about haswell is the motherboards, which are damn nice.
TheJian - Wednesday, June 5, 2013 - link
This is incorrect. It is only competitive when you TAP out the gpu by forcing them into situations they can't handle. If you drop the res to 1080p suddenly the CPU is VERY important and they part like the red sea.This is another attempt at covering for AMD and trying to help them sell products (you can judge whether it's intentional or not on your own). When no single card can handle the resolutions being forced on them (1440p) you end up with ALL cpu's looking like they're fine. This is just a case of every cpu saying hurry up mr. vid card I'm waiting (or we're all waiting). Lower the res to where they can handle it and cpus start to show their colors. If this article was written with 1080p being the focus (as even his own survey shows 96% of us use it OR lower, and adding in 1920x1200 you end up with 98.75%!!) you would see how badly AMD is doing vs Intel since the video cards would NOT be brick walled screaming under the load.
http://www.tomshardware.com/reviews/neverwinter-pe...
An example of what happens when you put the vid card at 1080p where cpu's can show their colors.
"At this point, it's pretty clear that Neverwinter needs a pretty quick processor if you want the performance of a reasonably-fast graphics card to shine through. At 1920x1080, it doesn't matter if you have a Radeon HD 7790, GeForce GTX 650 Ti, Radeon HD 7970, or GeForce GTX 680 if you're only using a mid-range Core i5 processor. All of those cards are limited by our CPU, even though it offers four cores and a pretty quick clock rate."
It's not just Civ5. I could point out how inaccurate the suggestions in this 1440p article are all day. Just start looking up cpu articles on other web sites and check the 1080p data. Most cpu articles show using a top card (7970 or 680 etc) so you get to see the TRUTH. The CPU is important in almost EVERY game, unless you shoot the resolution up so high they all score the same because your video card can't handle the job (thus making ANY cpu spend all day waiting on the vid card).
I challenge anandtech to rerun the same suite, same chips at 1080p and prove I'm wrong. I DARE YOU.
http://www.hardocp.com/article/2012/10/22/amd_fx83...
More evidence of what happens when gpu is NOT tapped out. Look at how Intel is KILLING AMD at hardocp. Even if you say "but eventually I'll up my res and spend $600 on a 1440p monitor", you have to understand that as you get better gpu's that can handle that res, you'll hate the fact you chose AMD for a cpu as it will AGAIN become the limiter.
"Lost Planet is still used here at HardOCP because it is one of the few gaming engines that will reach fully into our 8C/8T processors. Here we see Vishera pull off its biggest victory yet when compared to Zambezi, but still lagging behind 4 less cores from Intel."
"Again we see a new twist on the engine above, and it too will reach into our 8C/8T. While not as pronounced as Lost Planet, Lost Planet 2 engine shows off the Vishera processors advancements, yet it still trails Intel's technology by a wide margin."
"The STALKER engine shows almost as big an increase as we saw above, yet with Intel still dealing a crippling gaming blow to AMD's newest architecture."
Yeah, a 65% faster Intel is a LOT right? Understand if you go AMD now, once you buy a card (20nm maxwell etc? 14nm eventually in 3yrs?) you will CRY over your cpu limiting you at even 1440p. Note the video card Hardocp use for testing was ONLY a GTX 470. That's old junk, he could now run with 7970ghz or 780gtx and up the res to 1080p and show the same results. AMD would get a shellacking.
http://techreport.com/review/24879/intel-core-i7-4...
Here, Techreport did it at 1080p. 20% lower for the A10-5800 than the 4770K in Crysis 3. It gets worse with Far Cry 3 etc. In Far Cry 3 the i7-4770K scored 96fps at 1080p, yet AMD's A10-5800 scored a measly 68. OUCH. So roughly 30% slower in this game. HOLY COW man, check out Tomb Raider...Intel 126fps! AMD A10-5800 68fps! Does Anandtech still say this is a good CPU to go with? At the res 98.75% of us run at, YOU ARE WRONG. That's almost 2x faster in Tomb Raider at 1080p! Metro Last Light: Intel 93fps vs. AMD A10-5800 51fps, again almost TWO TIMES faster!
From Ian's conclusion page here:
"If I were gaming today on a single GPU, the A8-5600K (or non-K equivalent) would strike me as a price competitive choice for frame rates, as long as you are not a big Civilization V player and do not mind the single threaded performance. The A8-5600K scores within a percentage point or two across the board in single GPU frame rates with both a HD7970 and a GTX580, as well as feel the same in the OS as an equivalent Intel CPU."
He's not even talking the A10-5800 that got SMASHED at techreport as shown in the link. Note they only used a RAdeon 7950. A 7970ghz or GTX 780 would be even less taxed and show even larger CPU separations. I hope people are getting the point here. Anandtech is MISLEADING you at best by showing a resolution higher than 98.75% of us are using and tapping out the single gpu. I could post a dozen other cpu reviews showing the same results. Don't walk, RUN away from AMD if you are a gamer today (or tomorrow). Haswell boards are supposed to take a broadwell chip also, even more ammo to run from AMD.
Ian is recommending a cpu that is lower than the one I show getting KILLED here. Games might not even be playable as the A10-5800 was hitting 50fps AVG on some things. What would you hit with a lower cpu avg, and worse what would the mins be? Unplayable? Get a better CPU. You've been warned.
haukionkannel - Wednesday, June 5, 2013 - link
Hmm... If the game is fast enough at 1440p then it is fast enough for 1080p... We are talking about serious players. Who on earth would buy a 7970 or 580 for gaming at 1080p? That is serious overkill... We all know that Intel will run faster if we use 720p, just because it is a faster CPU than AMD; nothing new there since the era of the Pentium 4 and Athlon. What this article tells us is that if you want to play games with some serious GPU power, you can save money by using an AMD CPU with a single, or even in some cases double, GPU. If you go beyond that the CPU becomes a bottleneck.
TheJian - Thursday, June 6, 2013 - link
The killing happened at 1080p also which is what techreport showed. Since 98.75% of us run 1920x1200 or below, I'm thinking that is pretty important data.The second you put in more than one card the cpus separate even at 1440p. Meaning, next years SINGLE card or the one after will AGAIN separate the cpus as that single card will be able to wait on the CPU as the bottleneck goes back to cpu. Putting that aside, hardocp showed even the mighty titan at $1000 had stuff turned of at 1080p. So you are incorrect. Is it serious overkill if hardocp is turning stuff off for a smooth game experience? 7970/GTX680 had to turn off even more stuff in the 780GTX review (titan and 780gtx mostly had the same stuff on, but the 7970ghz and 680gtx they compared to turned off quite a bit to remain above 30fps).
I'm a serious player, and I can't run 1920x1200 with my radeon 5850 which was $300 when I bought it. I'm hoping maxwell will get me 30fps with EVERYTHING on in a few games at 1440p (I'm planning on buying a 27 or 30in at some point) and for the ones that don't I'll play them on my Dell 24 as I do now. But the current cards (without spending a grand and even that don't work) in single format still have trouble with 1080p as hardocp etc has shown. I want my next card to at least play EVERY game at 1920x1200 on my dell, and hope for a good portion on the next monitor purchase. With the 5850 I run a lot of games on my 22in at 1680x1050 to enable everything. I don't like turning stuff down or off, as that isn't how the dev intended me to play their game right?
Apparently you think all 7970 and 580 owners are all running 1440p and up? Ridiculous. The steam survey says you are woefully incorrect. 98.75% of us are all running 1920x1200 or below and a TON of us have 7970, 680, 580 etc etc (not me yet) and enjoying the fact that they NEVER turn stuff down (well, apparently you still do on some games...see the point?). Only DUAL card owners are running above as the steam survey shows, go there and check out the breakdown. You can see the population (even as small as that 1% is...LOL) has TWO cards running above 1920x1200. So you are categorically incorrect or steam's users change all their resolutions down just to fake a survey?...ROFL. Ok. Whatever. You expect me to believe they get done with the survey and jack it up for UNDER 30fps gameplay? Ok...
Even here, at 1440p for instance, metro only ran 34fps (and last light is more taxing than 2033). How low do you think the minimums are when you're only doing 34fps AVERAGE? UNPLAYABLE. I can pull anandtech quotes that say you'd really like 60fps to NEVER dip below 30fps minimum. In that they are actually correct and other sites agree...
http://www.guru3d.com/articles_pages/palit_geforce...
"Frames per second Gameplay
<30 FPS very limited gameplay
30-40 FPS average yet very playable
40-60 FPS good gameplay
>60 FPS best possible gameplay
So if a graphics card barely manages less than 30 FPS, then the game is not very playable, we want to avoid that at all cost.
With 30 FPS up-to roughly 40 FPS you'll be very able to play the game with perhaps a tiny stutter at certain graphically intensive parts. Overall a very enjoyable experience. Match the best possible resolution to this result and you'll have the best possible rendering quality versus resolution, hey you want both of them to be as high as possible.
When a graphics card is doing 60 FPS on average or higher then you can rest assured that the game will likely play extremely smoothly at every point in the game, turn on every possible in-game IQ setting."
So as the single 7970 (assuming ghz edition here in this 1440p article) can barely hit 34fps, by guru3d's definition it's going to STUTTER. Right? You can check max/avg/min everywhere and you'll see there is a HUGE diff between min and avg. Thus the 60fps point is assumed good to ensure above 30 min and no stutter (I'd argue higher depending on the game, mulitplayer etc as you can tank when tons of crap is going on). Guru3d puts that in EVERY gpu article.
The single 580 in this article can't even hit 24fps and that is AN AVERAGE. So unplayable totally, thus making the whole point moot right? You're going to drop to 1080p just to hit 30fps and you say this and a 7970 is overkill for 1080p? Even this FLAWED article here proves you WRONG.
Sleeping dogs right here in this review on a SINGLE 7970 UNDER 30fps AVERAGE. What planet are you playing on? If you are hitting 28.2fps avg your gameplay SUCKS!
http://www.tomshardware.com/reviews/geforce-gtx-77...
Bioshock infinite 31fps on GTX 580...Umm, mins are going to stutter at 1440p right? Even the 680 only gets 37fps...You'll need to turn both down for anything fluid maxed out. Same res for Crysis 3 shows even the Titan only hitting 32fps and with DETAILS DOWN. So mins will stutter right? MSAA is low, you have two more levels above this which would put it into single digits for mins a lot. Even this low on msaa the 580 never gets above 22fps avg...LOL. You want to rethink your comments yet? The 580's avg was 18 FPS! 1440p is NOT for a SINGLE 580...LOL. Only 25fps for 7970...LOL. NOT PLAYABLE on your 7970ghz either. Clearly this game is 1080p huh? Look how much time in the graph 7970ghz spends BELOW 20fps at 1440p. Serious gamers play at 1080p unless they have two cards. FAR CRY 3, same story. 7970ghz is 29fps...ROFL. The 580 scores 21fps...You go right ahead and try to play these games at 1440p. Welcome to the stutterfest my friend.
"GeForce GTX 770 and Radeon HD 7970 GHz Edition nearly track together, dipping into the mid-20 FPS range."
Yeah, Far Cry will be good at 20fps.
Hitman Absolution has to disable MSAA totally...LOL. Even then 580 only hits 40fps avg.
Note the tomb raider comment at 1440p:
"The GeForce GTX 770 bests Nvidia’s GeForce GTX 680, but neither card is really fluid enough to call the Ultimate Quality preset smooth."
So 36fps and 39fps avg for those two is NOT SMOOTH. 770 dropped to 20fps for a while.
A titan isn't even serious overkill for 1080p. It's just good enough and for hardocp a game or two had to be turned down even on it at 1080p! The data doesn't lie. Single cards are for 1080p. How many games do I have to show you dipping into the 20's before you get it? Batman AC barely hits 30's avg on 7970ghz with 8xmsaa and you have to turn physx off (not nv physx, phsyx period). Check tom's charts for gpus.
In hardocp's review of 770gtx 1080p was barely playable with 680gtx and everything on. Upping to 2560x1600 caused nearly every card to need tessellation down and physx off in Metro Last Light. 31fps min on 770 with SSAA OFF and Physx OFF!
http://hardocp.com/article/2013/05/30/msi_geforce_...
You must like turning stuff off. I don't think you're a serious gamer until you turn everything on and expect it to run there. NO SACRIFICING quality! Are we done yet? If this article really tells you to pair expensive gpus ($400-1000) with a cheapo $115 AMD cpu then they are clearly misleading you. It looks like is exactly what they got you to believe. Never mind your double gpu comment paired with the same crap cpu adding to the ridiculous claims here already.
Calinou__ - Friday, June 7, 2013 - link
"Serious gamers play at 1080p unless they have two cards."Fun fact 2: there are properly coded games out there which will run fine in 2560×1440 on mid-high end cards.
TheJian - Sunday, June 9, 2013 - link
No argument there. My point wasn't that you can't find a game to run at 1440p ok. I could cite many, though I think most wouldn't be maxed out doing it on mid cards and surely aren't what most consider graphically intensive. But there are FAR too many that don't run there without turning lots of stuff off as many sites I linked to show. Also 98.75% of us don't even have monitors that go above 1920x1200 (I can't see many running NON-Native but it's possible), so not quite sure fun fact2 matters much so my statement is still correct for nearly 99% of the world right? :) There are probably a few people in here who care what the top speed of a Veyron SS is (maybe they can afford one, 258mph I think), but for the vast majority of us, we couldn't care less about it since we'll never buy a car over 100K. I probably could have said 50K and still be right for most.Your statement kind of implies coders are lazy :) Not going to argue that point either...LOL. Not all coders are lazy mind you...But with so much power on pc's it's probably hard not to be lazy occasionally, not to mention they have the ability to patch them to death afterwards. I can handle 1-2 patches but if you need 5 just to get it to run properly after launch on most hardware (unless adding features/play balancing etc like a skyrim type game etc) maybe you should have kept it in house for another month or two of QA :) Just a thought...
Sabresiberian - Wednesday, June 5, 2013 - link
So, you want an article specifically written for gaming at 2560x1440 to do the testing at 1920x1080? Your rant starts from that low point and goes downhill from there.
TheJian - Thursday, June 6, 2013 - link
You completely missed the point. The article is testing for .87% of the market. That is less than one percent. This article will be nice to reprint in 2-3yrs... Then it may actually be relevant. THAT is the point. I think it's a pretty HIGH point, not low, and the fact that you choose to ignore the data in my post doesn't make it any less valid or real. Nice try though :) Come back when you have some data actually making a relevant point please.
Calinou__ - Friday, June 7, 2013 - link
So, all the websites that are about Linux should shut down because Linux has ~1% market share? Nope.
TheJian - Sunday, June 9, 2013 - link
The comparison doesn't make sense. This is about making false claims and misrepresenting data. What does that have to do with Linux? Come back when you have a decent argument about the data in question here.
UltraTech79 - Saturday, June 22, 2013 - link
That's a pretty shitty point.
Jon Irenicus - Sunday, June 16, 2013 - link
who cares what most of the market has, 1440p monitors are in the 300 dollar range from the korean ebay sellers, just because a bunch of no nothings did not get the memo and get one of those better monitors and spent all their cash upgrading their cpu/gpus with their crap 1080p monitors does not mean reviews should not focus on where people SHOULD go.1080p is a garbage resolution for large displays when you have easy and CHEAP access to 1440p. I got one of those monitors, it's beautiful. The problem is not the 4% that are higher than 1080/1200p, is the rest of you who are too cpu focused to get a better monitor.
I mean jesus people, you sit and stare at that thing ALL DAMN DAY, and people actually spend HUNDREDS of dollars on multi gpu setups and high end cpus to game at 1080p... it's submental. YOU and others need to stop complaining about a lack of focus on 1080p, and get on board the 1440p train. You don't have that? well get it, stop lagging, you are choosing an inferior setup and complaining to anandtech because they chose not to focus on your crap resolution monitor?
It's almost as if you specifically cripple your gaming resolution just so you can feel more satisfied at how much faster the intel cpus beat out the amds. Well, you're right, they do, and you still chose an inferior gaming resolution, stop living in the ghetto of the pc gaming world and move higher.
UltraTech79 - Saturday, June 22, 2013 - link
I stopped reading at "no nothings". Lol what a ranting lunatic.
metasyStratS - Thursday, June 6, 2013 - link
"This is another attempt at covering for AMD and trying to help them sell products... Anandtech is MISLEADING you at best by showing a resolution higher than 98.75% of us are using and tapping out the single gpu..."You could also easily argue that the article is helping to sell Intel's 4770K, providing data that misleadingly (though not falsely) indicates the superiority of the 4770K over the 2500K/3770K group.
For the majority gamers, it is indeed misleading to focus on 1440p only. For a good number, it is also misleading to focus only on stock clocks.
As you point out, at 1080p overclocking does help (though the benefit has to be weighed against the cost of upgraded cooling). And as others in forums have pointed out, 2700K vs. 3770K is roughly equal: with any given aftermarket cooler, a 3770K at 'Maximum Stable Overclock' will have roughly the same performance as a 2700K at 'Maximum Stable Overclock', will run hotter than a 2700K, but will consume less energy, and so on...
On the other hand, preliminary indications are that for the majority of overclockers (those who do not want to spend as much for a custom water-cooling loop as for the CPU itself), a 4770K is a worse bet, as it apparently runs hotter than even the 3770K, and the gains in 'Instructions per Clock' likely do not make up for what would thus be a reduced 'Maximum Stable Overclock.' See here: http://forums.pureoverclock.com/cpu-overclocking/2...
In short: CPU overclocking yields a tangible benefit for 1080p gamers, and for the majority of CPU Overclockers (those who do not want to spend as much for a custom water-cooling loop as for the CPU itself), the 4770K appears to be something LESS than a 3770K or 2700K.
TheJian - Thursday, June 6, 2013 - link
I didn't say anything about overclocking. Maybe one of the quotes did? My statements are pure chip to chip, no overclocking discussed. Maybe you were replying to someone else?
The article isn't helping to sell 4770Ks when he says single-GPU owners (98% according to the steampowered survey) can play fine on an A8-5600. Single-GPU owners, again according to the survey, are NOT running above 1920x1200. So AMD gets killed unless you pull a stunt like Anandtech did here, as the benchmarks in the links I pointed to show.
I did not point out overclocking at 1080p helps. I made no statement regarding overclocking, but Intel wins that anyway.
Obsoleet - Thursday, June 6, 2013 - link
DOWN WITH THE 1.25%!!
Calinou__ - Friday, June 7, 2013 - link
Fun fact: the A10-5800K's upside is its IGP, not the processor part. If you want to do gaming with an AMD CPU you had better pick an FX-6xxx or an FX-8xxx.
dishayu - Tuesday, June 4, 2013 - link
I'm sorry if I missed this info while reading, but does Haswell come with dual-link DVI support? You know, so that I can drive my 1440p displays for everyday usage, since I don't game all that much.
Mobilus - Tuesday, June 4, 2013 - link
The problem isn't Haswell, the problem is the mainboard. You would need a mainboard that supports dual-link and at least with the older generations that feature wasn't implemented. Unless the usual suspects changed that with their new offerings, you will have to use a displayport to dvi adapter to get that resolution without a dedicated card (hdmi on mainboards is usually restricted to 1080p as well, unless... see above).
K_Space - Tuesday, June 4, 2013 - link
I know Anandtech hasn't got to review the Richland desktop variants yet; but if the current recommendation is a Trinity APU, surely a >10% performance increase and a lower TDP would clinch it for Richland?
The newly launched top-end A8-6600K is £20 more than the A8-5600K... but that's launch price.
MarcVenice - Tuesday, June 4, 2013 - link
Please, for the love of god, add a game like Crysis 3 or Far Cry 3. Your current games are all very old, and you will see a bigger difference in newer games.
garrun - Tuesday, June 4, 2013 - link
Agree with request for Crysis 3. It has enough options to deliver a great visual experience and GPU beating, and it also scales well to multi-monitor resolutions for testing at extremes.
BrightCandle - Tuesday, June 4, 2013 - link
gamegpu.ru have done a lot of testing on all games with a variety of CPUs. Anandtech's choice of games is actually edge cases. Once you start looking at a wider list of games (just do a few CPUs but lots of games) you'll see a much bigger trend of performance difference, especially in a lot of the non-AAA titles. Around 50% of games show a preference for 3930Ks at this point over a 2600K, so more multithreading is starting to appear, but you need to test a lot more games or you won't catch that trend and will instead come to a misleading conclusion.
ninjaquick - Tuesday, June 4, 2013 - link
I am not sure that the CPU is used any more heavily in more recent games. This is a CPU test, and testing older games that are known to be CPU dependent is a must.
Moving forward, with the next gen consoles that is, testing the absolute newest multiplatform games will be a bit more relevant. However, even Farcry 3 and Crysis 3 are mostly GPU bound, so there will be little to no difference in performance by changing the CPUs out.
superjim - Tuesday, June 4, 2013 - link
Was thinking the same. Tomb Raider, BF3, Crysis 3, hell even Warhead would be good.
garrun - Tuesday, June 4, 2013 - link
I think Supreme Commander or Supreme Commander 2 would make an excellent CPU demo. Those games have been, and remain, CPU limited in a way no other games are, and for good reasons (complexity, AI, unit count), rather than poor coding. A good way to do this is to record a complex 8 player game against AI and then play it back at max speed, timing the playback. That benchmark responds pretty much 1:1 with clock speed increases and also has a direct improvement effect on gameplay when dealing with large, complex battles with thousands of units on map. The upcoming Planetary Annihilation should also be a contender for this, but isn't currently in a useful state for benchmarking.
Traciatim - Tuesday, June 4, 2013 - link
I kind of hope Planetary Annihilation will have both server and client benchmarks available, since this seems like it would be a pretty amazing platform for benchmarking.
IanCutress - Tuesday, June 4, 2013 - link
Interesting suggestion - is SupCom2 still being updated for performance in drivers? Does playback come out with the time automatically or is it something I'll have to try and code with a batch file? Please email me with details if you would like, I've never touched SupCom2 before.
Ian
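If the game does not report the playback time itself, a tiny wall-clock wrapper is one way to approximate the timing idea suggested above. This is only a rough sketch; the executable name and the replay flag are hypothetical placeholders, not actual SupCom command-line options.

```cpp
// Launch a replay and report how long the process took to finish.
// The command string below is a placeholder; the real flags would need checking.
#include <chrono>
#include <cstdlib>
#include <iostream>

int main()
{
    const char* cmd = "SupremeCommander.exe /replay bench.rep";   // hypothetical
    const auto start = std::chrono::steady_clock::now();
    const int rc = std::system(cmd);                // blocks until the game exits
    const auto stop = std::chrono::steady_clock::now();
    const auto secs =
        std::chrono::duration_cast<std::chrono::seconds>(stop - start).count();
    std::cout << "exit code " << rc << ", playback took " << secs << " s\n";
    return 0;
}
```

A faster CPU should finish the same max-speed replay in less wall-clock time, which is what makes the 1:1 clock-speed scaling claim above testable.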
yougotkicked - Tuesday, June 4, 2013 - link
This sounds quite interesting, though I wonder if the AI is runtime bound rather than solution bound, as this could make the testing somewhat nondeterministic.
To clarify what I mean: a common method in AI programming is to let algorithms continue searching for better and better solutions, interrupting the algorithm when a time limit has passed and taking the best solution found so far. Such approaches can result in inconsistent gameplay when pitting multiple AI units against each other, which may change the game state too much between trials to serve as a good testing platform.
Even if the AI does use this approach it may not bias the results enough to matter, so I guess the only way to be sure is to run the tests a few times and see how consistent the results are on a single test system.
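As a minimal sketch of the "runtime bound" (anytime) approach described above, assuming a hypothetical Plan type and refinement step: the loop simply improves its best answer until a time budget expires, so a faster CPU gets more iterations and can make different decisions from run to run.

```cpp
#include <chrono>

struct Plan { double score = 0.0; };

// Placeholder refinement step; a real game AI would search for a better plan here.
static Plan ImprovePlan(const Plan& current) { return { current.score + 1.0 }; }

Plan PlanWithBudget(std::chrono::milliseconds budget)
{
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + budget;

    Plan best;
    while (clock::now() < deadline) {       // runtime bound: stop when time is up
        Plan candidate = ImprovePlan(best);
        if (candidate.score > best.score)
            best = candidate;               // keep the best solution found so far
    }
    return best;                            // whatever we had when the limit hit
}
```

A solution-bound AI, by contrast, does a fixed amount of work regardless of clock speed, which is why replays of recorded games stay deterministic.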
Zoeff - Tuesday, June 4, 2013 - link
Forget about SupCom2 - that game has been scaled down quite a bit compared to SupCom1 and isn't as demanding on CPUs. There's also an active SupCom1 community that has pushed out, and still is pushing out, community-made patches. :-)
SupCom actually has a built-in benchmark that plays a scripted map with some fancy camera work. Anyone can launch this by adding "/map perftest" to your shortcut. That said, it doesn't seem to be working properly anymore after several patches, nor does it actually give any useful data, as the sim score is capped at 10k for today's CPUs. And yet it's extremely easy to cripple any CPU you throw at it when simply playing the game. Just open up an 81x81km map with 7 AI enemies and watch your computer slow to a crawl as the map starts filling up.
And yes, the AI is "solution bound". Replays of recorded games with AI in them wouldn't work otherwise.
I wonder if somebody could create a custom SupCom1 benchmark... *Hint Hint*
FBB - Tuesday, June 4, 2013 - link
They've had over 5 million concurrent online users. The total number will be much higher.
DanNeely - Tuesday, June 4, 2013 - link
What exactly does Steam count as online? Does just having the client sit in my tray count, or do I need to be playing a Steam game at the time to be counted?
wicko - Tuesday, June 4, 2013 - link
Definitely just signed in: 823,220 Players In-Game | 4,309,324 Players Online
Source: http://steamcommunity.com/
chizow - Tuesday, June 4, 2013 - link
Thanks for the tests, there are a lot of data points in there so that's always appreciated.
I would've liked to have seen some higher-perf Nvidia solutions in there though, at the very least some Kepler parts. It looks like a lot of the higher end Intel parts hit a GPU bottleneck at the top, which is not unexpected at 1440p with last-gen Fermi parts.
What it does show for sure is, you may give pause to going beyond 2-way CF/SLI if you have to go lower than x8 on that 3rd slot. Which means you will probably have to shell out for one of the pricier boards. Hard not to recommend X79 at this point for 3-way or higher, although the lack of official PCIe 3.0 support was a red flag for me.
I went with the Gigabyte Z87x UD4 because I don't ever intend to go beyond 2-way SLI and the 3rd slot being x4 (2.0) was better than the x8/x4/x4 (3.0) config on most boards, which gives me the option to run a PhsyX card and retain x8/x8 (3.0) for my two main cards.
Gunbuster - Tuesday, June 4, 2013 - link
So I'll stick with my 2600K @4.5GHz and continue to ponder what new Korean 27" LCD to get. Tech is pretty boring at the moment.
wicko - Tuesday, June 4, 2013 - link
I haven't bothered overclocking my 2600K and I still feel it's plenty powerful. I think I may get a second GTX 670 though, Metro Last Light doesn't run all that great at 2560x1440.kallogan - Tuesday, June 4, 2013 - link
Haswell, haswell, haswell. Making one paper per day about it will not make it better. Boring CPU gen. Wake me up when something interesting shows up.
chizow - Tuesday, June 4, 2013 - link
So I guess the solution is to just ignore the launch to placate all those who have no interest in the launch, rather than post reviews and info about it for the ones that actually do? Doesn't make a lot of sense.
If it doesn't interest you, move along.
Dentons - Tuesday, June 4, 2013 - link
His complaint is on the mark. Haswell is about mobile, not desktop, not gaming.
Ivy Bridge was about cost reduction, Haswell is about reducing TDP. It is shocking that a mid-range 2+ year old Sandy Bridge desktop part is still so very competitive, even though it's been superseded by two whole generations.
Intel deserves all this criticism and more. They've clearly put the interests of desktop users and gamers far onto the back burner. They're now focused on almost entirely mobile and are treading water with everything else.
takeship - Tuesday, June 4, 2013 - link
Eh, how can you blame them? The pure play desktop market has been shrinking for a while now, with high performance desktop (basically gamers) even more of a niche. Maybe if they had some real competition from AMD in single threaded perf... A lot of this is just Amdahl's law at it's natural conclusion. The easy performance gains are mostly gone, so if you're Intel do you dump endless money into another 25-30% per generation, or go after the areas that haven't been well optimized yet instead? Not a hard choice to make, especially considering the market moves towards mobile & cool computing in the last decade.Silma - Wednesday, June 5, 2013 - link
Intel doesn't deserve criticism. Haswell is a small improvement over Ivy Bridge because it has become extremely difficult to optimize an already excellent processor. Do you see anything better from AMD, ARM, Oracle or others?
Is there a need to upgrade from Ivy to Haswell? No. Was it necessary to upgrade from Nehalem to Sandy Bridge? No. The fact is that for most applications processors have been good enough for years and money is better spent on SSDs, GPUs and whatnot.
The real conclusion of this article should be that processors absolutely do not matter for gaming and that the money is better spent on a speedier GPU. Processors may become relevant for the very, very, very few people that have extreme 2x/3x extreme-card setups. Even a setup with two middle cards such as the GTX 560 is not CPU dependent. I would welcome actual statistics on the number of players with 2x/3x high-end GPUs. I'm quite sure the count is ultra tiny, and for those people willing and able to spend thousands of dollars, do you think $100 less on a processor is relevant?
chizow - Wednesday, June 5, 2013 - link
I don't have a problem with the conclusion he comes to; complaining about the dissemination of information used to come to that conclusion is what makes no sense. Put all the information out there, 1, 2, 3 articles a day np, then make your own informed decision on the platform. Bemoan the fact there is actual coverage a day or two after launch and one or two reviews? Makes no sense.
Memristor - Tuesday, June 4, 2013 - link
Too bad that Richland, which is available as of today, didn't make it into this review. Other than that, great read.
eddieobscurant - Tuesday, June 4, 2013 - link
Many of us have a Q6600 @ 3600MHz, and personally I'm very happy with this and my 7870. I would still like to see a comparison of my CPU @ 3600MHz with the modern CPUs, because I don't think there is a huge difference in games.
chizow - Tuesday, June 4, 2013 - link
It depends what you play; any game that is CPU limited is going to show a HUGE difference with that CPU. I had the same chip at 3.6GHz, which was great btw, and even when I upgraded to a 920 @4GHz there was a huge improvement in some games, most notably GTA4 at the time. Some other games that scale extremely well with CPU are WoW, Diablo 3, etc., just to name a few.
medi02 - Wednesday, June 5, 2013 - link
Nah. Most of the tests show that to get CPU limited you need a multi-GPU setup.
An i7 and Intel mobo will cost you about $500 for marginal improvements.
chizow - Wednesday, June 5, 2013 - link
Sorry, just not true. Even with just 1x680, WoW and other similarly CPU dependent games scale tremendously well with faster CPUs:
http://www.tomshardware.com/reviews/fx-8350-visher...
Q6600 @ 3.6 is probably just a tad faster than the Phenom IIs in that test.
TheJian - Thursday, June 6, 2013 - link
See my comments here...Chizow is correct, and even understating it some. There are a LOT of games cpu limited as I showed in my links. Huge differences in cpu perf from A10-5800 up to 4770k, never mind the junk Ian recommends here A8-5600 for single gpu. It just isn't correct to recommend that cpu or even A10-5800K which I showed getting smacked around in many games at 1080p. Articles like this make people think games are not cpu bound (it's far more games than Civ5). Neverwinter, Metro Last light, tomb raider, Farcry3, Crysis 3 etc etc...Once 20nm comes we may find even 1440p showing just as many limited by cpu. If rumors are true Volcanic doubles stream processors. I'm sure NV will match that. You end up gpu bound when you up the res to 1440 on single cards now, but that won't be forever and 98.75% of us according to steam don't play at 1440p (.87%) or above (1.25% total of all res above 1920x1200).Check the 1080p data on my links (techreport was a good one as they show 1080p in most of the listed games). Toms shows neverwinter as I noted needing a very high cpu also. Hit all comments on this article, and Ctrl-F my name. Ignore my post comments and just click the links in them to prove Chizow's point (and my own). CPU is important at 1080p and 1920x1200 NOW and will be important at higher res with the next gen cards at 20nm. You will never get out of your AMD mistake if you take this article's suggestions. Well at least not without changing to an Intel board/chip...LOL. Who wants to do that? Just buy an Intel unless you're broke. Don't trust me though, read the links provided and judge for yourself how accurate anandtech is here.
I showed some games that are nearly DOUBLE on Intel vs. A10-5800K! You don't have to like the way I made my point or believe me, just check the links :) They all say the same thing. CPU is an issue just as Chizow shows in his link. You can find this in many cpu articles where they use a top gpu (usually 7970/680) and test new cpus with the oldies in there too which show large separations. Check i7-3770k or fx 8350 articles (just google those two cpu models and "review" for ample sites showing the spreak)...1080p separates the men from the boys in cpu's.
After you check the links (and chizow's), come back and agree Anandtech needs to change their ways, or tear my comments apart if I'm lying :) Future gpu's will only make our point stick out even more. CPU matters. Also note a lot of the games that are gpu limited on single cards are NOT playable anyway (check sleeping dogs right here in this article 1440p...7970 at 28fps avg is NOT playable, mins will dip to 20's or below). So you're forced back into cpu limited in a lot of cases at 1080p. Where 98.75% of us play you see cpu limits a lot.
Go back one page on Chizow's link to Skyrim's benchmark in the same article for the same data. 1080p 3770 scores 88.2 to 8350's 67.4 (that's a lot and a huge hint to how your future on AMD will look)
http://www.tomshardware.com/reviews/fx-8350-visher...
That's a 30% difference and an 8350FX is far faster than an A8-5600 Ian recommends here. Chizow is even more right if you toss in Ian's recommendation of an even slower cpu than 8350 vs. Intel's stuff. Even in skyrim at 1680x1050 they separate from 90fps to 68fps for 8350fx. So until you completely tap out your gpu (1440p and up which basically requires 2+ cards) you will notice if your cpu is junk or not. Since this article is only written for apparently 1.25% of the readership (or world for that matter according to steam survey), you will notice the cpu! Unless you're raising your hand as the 1.25% :) I don't call 30-100% faster marginal improvements do you? Add CIV 5 also which this site even proves in this article ;) At least they got something right.
TheJian - Thursday, June 6, 2013 - link
http://www.tomshardware.com/reviews/a10-6700-a10-6...
Check the Tom's A10-6800 review. With only a 6670D card, the i3-3220 STOMPS the A10-6800K with the same 6670 Radeon card at 1080p in F1 2012. 68fps to 40fps is a lot, right? Both chips are roughly $145. Skyrim shows the 6800K well, but you need 2133 memory to do it. But faster Intel CPUs will leave this in the dust with a better GPU anyway.
http://www.guru3d.com/articles_pages/amd_a10_6800k...
You can say 100fps is a lot in far cry2 (it is) but you can see how a faster cpu is NOT limiting the 580 GTX here as all resolutions run faster. The i7-4770 allows GTX 580 to really stretch it's legs to 183fps, and drops to 132fps at 1920x1200. The FX 8350 however is pegged at 104 for all 4 resolutions. Even a GTX 580 is held back, never mind what you'd be doing to a 7970ghz etc. All AMD cpu's here are limiting the 580GTX while the Intel's run up the fps. Sure there are gpu limited games, but I'd rather be using the chip that runs away from slower models when this isn't the case. From what all the data shows amongst various sites, you'll be caught with your pants down a lot more than anandtech is suggesting here. Hopefully that's enough games for everyone to see it's far more than Civ5 and even with different cards affecting things. If both gpu sides double their gpu cores, we could have a real cpu shootout in many things at 1440p (and of course below this they will all spread widely even more than I've shown with many links/games).
roedtogsvart - Tuesday, June 4, 2013 - link
Hey Ian, how come no Nehalem or Lynnfield data points? There are a lot of us on these platforms who are looking at this data to weigh vs. the cost of a Haswell upgrade. With the ol' 775 geezers represented it was disappointing not to see 1366 or 1156. Superb work overall however!
roedtogsvart - Tuesday, June 4, 2013 - link
Other than 6 core Xeon, I mean...
A5 - Tuesday, June 4, 2013 - link
Hasn't had time to test it yet, and hardware availability. He covers that point pretty well in this article and the first one.
chizow - Tuesday, June 4, 2013 - link
Yeah I understand and agree, would definitely like to see some X58 and Kepler results.
ThomasS31 - Tuesday, June 4, 2013 - link
Seems 1440p is too demanding on the GPU side to show the real gaming difference between these CPUs.
Is 1440p that common in gaming these days?
I have the impression (from my CPU change experiences) that we would see different differences at 1080p for example.
A5 - Tuesday, June 4, 2013 - link
Read the first page?
ThomasS31 - Tuesday, June 4, 2013 - link
Sure. Though still, for a single GPU it would be a wiser choice to be "realistic" and do 1080p, which is more common (the single-monitor, average-Joe-gamer type of scenario).
And go 1440p (or higher) for multi-GPU and enthusiast setups.
The purpose of the article is choosing a CPU and that needs to show some sort of scaling in near real life scenarios, but if the GPU kicks in from start it will not be possible to evaluate the CPU part of the performance equation in games.
Or maybe it would be good to show some sort of combined score from all the tests, so the Civ V and other games show some differentiation at last in the recommendation as well, sort of.
core4kansan - Tuesday, June 4, 2013 - link
The G2020 and G860 might well be the best bang-for-buck CPUs, especially if you tested at 1080p, where most budget-conscious gamers would be anyway.
Termie - Tuesday, June 4, 2013 - link
Ian,
A couple of thoughts for you on methodology:
(1) While I understand the issue of MCT is a tricky one, I think you'd be better off just shutting it off, or if you test with it, noting the actual core speeds that your CPUs are operating at, which should be 200MHz above nominal Turbo.
(2) I don't understand the reference to an i3-3225+, as MCT should not have any effect on a dual-core chip, since it has no Turbo mode.
(3) I understand the benefit of using time demos for large-scale testing like what you're doing, but I do think you should use at least one modern game. I'd suggest replacing Metro2033, which has incredibly low fps results due to a lack of engine optimization, with Tomb Raider, which has a very simple, quick, and consistent built-in benchmark.
Thanks for all your hard work to add to the body of knowledge on CPUs and gaming.
Termie
IanCutress - Tuesday, June 4, 2013 - link
Hi Termie,
To answer your questions:
(1) Unfortunately for a lot of users, even DIY not just system integrators, they leave the motherboard untouched (even at default memory, not XMP). So choosing that motherboard with MCT might make a difference in performance. Motherboards without MCT are also different between themselves, depending on how quickly they respond to CPU loading and ramp up the speed, and then if they push it back down to idle immediately in a low period or keep the high turbo for a few seconds in case the CPU loading kicks back in.
2) This is a typo - I was adding too many + CPU results at the same time and got carried away.
3) While people have requested more 'modern' games, there are a couple of issues. If I release something that has just come out, the older drivers I have to use for consistency will either perform poorly or not scale (case in point, Sleeping Dogs on Catalyst 12.3). If I am then locked into those drivers for a year, users will complain that this review uses old drivers that don't have the latest performance increases (such as 8% a month for new titles not optimized) and that my FPS numbers are unbalanced. That being said, I am looking at what to do for 2014 and games - it has been suggested that I put in Bioshock Infinite and Tomb Raider, perhaps cut one or two. If there are any suggestions, please email me with thoughts. I still have to keep the benchmarks regular and have to run without attention (timedemos with AI are great), otherwise other reviews will end up being neglected. Doing this sort of testing could easily be a full time job, which in my case should be on motherboards and this was something extra I thought would be a good exercise.
Michaelangel007 - Tuesday, June 4, 2013 - link
It is sad to see poor journalism in the form of excuses in an otherwise excellent article. :-/
1. Any review sites that make excuses for why they ignore FCAT just highlight that they don't _really_ understand the importance of _accurate_ frame stats.
2. Us hardcore gamers can _easily_ tell the difference between 60 Hz and 30 Hz. I bought a Titan to play games at 1080p @ 100+ Hz on the Asus VG248QE using nVidia's LightBoost to eliminate ghosting. You do your readers a dis-service by again not understanding the issue.
3. Focusing on 1440 is largely useless as it means people can't directly compare how their Real-World (tm) system compares to the benchmarks.
4. If your benchmarks are not _exactly_ reproducible across multiple systems you are doing it wrong. Name & Shame games that don't allow gamers to run benchmarks. Use "standard" cut-scenes for _consistency_.
It is sad to see the quality of a "tech" article undermined by glossing over and trivializing important details.
AssBall - Tuesday, June 4, 2013 - link
Judging by your excellent command of English, I don't think you could identify a decent technical article if it slapped you upside the head and banged your sister.
Razorbak86 - Tuesday, June 4, 2013 - link
LOL. I have to agree. :)
Michaelangel007 - Wednesday, June 5, 2013 - link
There is a reason Tom's Hardware, Hard OCP, guru3d, etc. use FCAT.
I feel sad that you and the AnandTech tech writers are too stupid to understand the importance of high frame rates (100 Hz vs 60 Hz vs 30 Hz), frame time variance, 99th percentile, proper CPU-GPU load balancing, and micro stuttering. One of these days when you learn how to spell 'ad hominem' you might actually have something _constructive_ to add to the discussion. Shooting the messenger instead of focusing on the message shows you are still an immature little shit that doesn't know anything about GPUs.
Ignoring the issue (no matter how badly communicated) doesn't make it go away.
What are _you_ doing to help raise awareness about sloppy journalism?
DaveninCali - Tuesday, June 4, 2013 - link
Why doesn't this long article include AMD's latest APU, the Richland 6800K? Heck, you can even buy it now on Newegg.
ninjaquick - Tuesday, June 4, 2013 - link
The data collected in this article is likely a week or two old. Richland was not available at that time. It takes an extremely long time to do this kind of testing.
DaveninCali - Tuesday, June 4, 2013 - link
Richland was launched today. Haswell was launched two days ago. Neither CPU was available two weeks ago. It all depends on review units being released to review websites. Either Richland was left out because it wasn't different enough from Trinity to matter or AMD did not hand out review units.
majorleague - Wednesday, June 5, 2013 - link
Here is a YouTube link showing 3DMark11 and the Windows Experience Index rating for the 4770K 3.5GHz Haswell. Not overclocked.
YouTube link:
http://www.youtube.com/watch?v=k7Yo2A__1Xw
Chicken76 - Tuesday, June 4, 2013 - link
Ian, in the table on page 2 there's a mistake: the Phenom II X4 960T has a stock speed of 3 GHz (you listed 3.2 GHz), and it does turbo up to 3.4 GHz.
gonks - Tuesday, June 4, 2013 - link
Great work Ian! Definitely waiting to see the i5-3570K added into the mix, to see how it compares to the i5-2500K (the 3570K being more future-proof thanks to PCIe 3.0).
Harby - Tuesday, June 4, 2013 - link
Excellent review, though it would be awesome to see World of Warcraft and Rift in there. Both rely heavily on the CPU.
yougotkicked - Tuesday, June 4, 2013 - link
As always, thanks for the great article and hard work, Ian.
I'd really like to see how a few of the tests scale with overclocked CPUs, notably those in which the Sandy Bridge processors were competitive with Ivy Bridge and Haswell parts. Obviously overclocking introduces a lot of variables into your testing, but it would be very interesting to see a few of the popular choices tested (Sandy Bridge parts @ 4.5 GHz are quite common, and many users on such systems were waiting for Haswell before upgrading).
eBombzor - Tuesday, June 4, 2013 - link
Crysis 3 benchmarks PLEASE!!
frozentundra123456 - Tuesday, June 4, 2013 - link
Interesting results, but very limited as well. Why test at a resolution used by only 4% of players?
I would rather have seen the results at 1080p, over a wider variety of games, especially RTS games and newer games like Crysis 3, FC3, and Tomb Raider. I tested Heart of the Swarm on my computer with an HD 7770 and i5-2320 and was able to max out the CPU in a 10-player skirmish match at ultra, 1080p. So I am sure an A8-5600 would be limiting in that case.
Even considering only the results of the games tested, the A8-5600K seems a strange choice. The i3 seems just as valid, considering it is equal or faster in every game but one while using less power.
makerofthegames - Tuesday, June 4, 2013 - link
Question - are those blank entries for the Xeons because they could not run, or just that data was not collected for them?
Awful - Tuesday, June 4, 2013 - link
Glad to see there's no reason to upgrade the i5-2500K in my desktop yet - still happily chugging away at 4.9 GHz after 2 years!
holistic - Tuesday, June 4, 2013 - link
Ian, thank you for your time, effort, and energy in compiling an encyclopedic database on the effects of the CPU on single- and multi-GPU configurations, in alternate gaming/engine scenarios. Your work is insightful, informative, and wholly devoted to the science of benchmarking. This approach has helped me, as a relatively new computer enthusiast, to more deeply understand testing methodology in the computing field.
I am interested in the pure CPU benchmarks of StarCraft 2 with the 4770K and 4670K. I understand this game is not well optimized, is DirectX 9, and is extremely CPU-limited with only two cores active at most, and is thus not a top priority for benchmarking. Will Haswell be added to the benchmarking database for SC2?
Cheers,
Craig
khanov - Tuesday, June 4, 2013 - link
Ian, I have to say (again) that the i7-3820 should be in this review.
You say that the i7-4770K is a better value proposition than Sandy Bridge-E (X79), I assume because you are only thinking of the expensive 6-core X79 CPUs. That changes if you do consider the i7-3820.
X79 brings far better support for multi-GPU setups, with enough PCIe lanes to feed multiple cards quite happily. No PLX needed. Pair that with an i7-3820 (cheaper than the i7-3770K/i7-4770K) and you may find the performance surprisingly good for the price.
chizow - Friday, June 7, 2013 - link
I considered the 3820 numerous times (it's cheap at MC, same price as a high-end 3770K/4770K), but I shy away because it inexplicably performs *WORSE* than the 2700K/3770K/4770K. I don't know why; it has more L3 cache and is clocked higher before/after boost. Just an oddball chip.
Besides, X79 as a platform was dated almost as soon as it released. PCIe 3.0 support is spotty with Nvidia (the registry hack isn't guaranteed), there's no native USB 3.0, and no full SATA 6G support. I went for Z87 + 4770K instead because X79 + 3820 didn't offer any noticeable advantages while carrying a significantly higher price tag (board price).
TheJian - Wednesday, June 5, 2013 - link
So if you take out 1920x1200 from the Steam survey (4.16 - 2.91%, right?), you've written an article for ~1.25% of the world. Thanks... I always like to read about the 1% which means absolutely nothing to me and, well, 98.75% of the world. WHO CARES? As HardOCP showed, even a Titan still can't turn on EVERY detail at 1920x1080. I would think your main audience is the 99% with under $1000 for a video card (or worse for multi-GPU) and another $600-900 for a decent 1440p monitor you don't have to eBay from some dude in Korea.
Whatever... The midpoint to you is a decimal point of users (your res is .87%, meaning NOT even ONE PERCENT, and far fewer are above it, so how is that the midpoint? I thought you passed MATH)? Quit wasting time on this crap and give us FCAT data like PCPer etc. (who seem to be able to get FCAT results into EVERY video card release article they write).
"What we see is 30.73% of gamers running at 1080p, but 4.16% of gamers are above 1080p. If that applies to all of the 4.6 million gamers currently on steam, we are talking about ~200,000 individuals with setups bigger than 1080p playing games on Steam right now, who may or may not have to run at a lower resolution to get frame rates."
That really should read ~55,000 if you take away the 2.91% that run 1920x1200. And your gaming rig is 1080p, because unless you have a Titan (which still has problems turning it all on MAX and staying playable, according to HardOCP etc.), you need TWO video cards to pull off higher than 1920x1200 without constantly turning off details. If you wanted to game on your "Korean eBay special" you would (as if I'd ever give my CC# to some DUDE in a foreign country, as Ryan suggested to me in the 660 Ti comment section, ugh). It's simply a plug change to game, then a plug change back, right? Too difficult for a Doctor, I guess? ;)
This article needs to be written in 3 years, maybe with 14nm GPUs, when we might be able to run a single GPU that can turn it all on max and play above 30fps while doing it, and even then it will still be top rung. I really doubt Maxwell will do this; I'm sure they will still be turning stuff off or down to stay above a 30fps minimum, just as Titan has to do at 1080p now. Raise your hand if you think a $500 Maxwell card will be 2x faster than Titan.
A 1440p monitor yields an overall pixel count of 3,686,400, substantially higher than the 2,073,600 pixels of a 1080p monitor/TV. So since Titan falls SHORT of playing ALL games maxed at 1080p, we would need ~2x the power at, say, $500 for 1440p to be called anywhere NEAR mainstream, right? I don't see NV's $500 range doing 2x Titan with Maxwell, and that is 6-9 months away (6 for AMD Volcanic, ~7-9 for NV?). Raise your hand if you call $500 mainstream... I see no hands. They may do this at 14nm for $300, but that is a long way off, and most call $200 mainstream, right? Hence I say write this in another 3 years, when the 1080p share of users in the Steam survey (~31%) is actually the 1440p share. Quit writing for .87% please, and quit covering for AMD with FCAT excuses. We get new ones from this site with every GPU article: the drivers changed, some snafu invalidated all our data, not useful for this article, blah blah, while everyone else seems to be able to avoid all of AnandTech's issues with FCAT and produce FCAT result after FCAT result. Odd that you are the ONLY site AMD talked to directly (which even Hilbert at Guru3d mentions... rofl). OK, correction. IT'S NOT ODD. AMD's personal attention to a website = no FCAT results until prototype/driver issues are fixed... simple math.
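The arithmetic being traded back and forth in this thread is easy to check; a quick sketch using only the resolutions and Steam survey percentages quoted above (the ~4.6 million concurrent-user figure comes from the article quote cited by the commenters):

```python
# Quick check of the numbers under dispute: pixels per resolution, the
# 1440p-to-1080p ratio, and how many of Steam's ~4.6M concurrent users the
# quoted survey percentages represent. All inputs are figures cited above.
resolutions = {"1080p": (1920, 1080), "1200p": (1920, 1200), "1440p": (2560, 1440)}
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:,} pixels")

ratio = (2560 * 1440) / (1920 * 1080)
print(f"1440p vs 1080p pixel ratio: {ratio:.2f}x")

concurrent_users = 4_600_000
pct_above_1080p = 4.16   # survey share above 1920x1080
pct_1200p = 2.91         # of which 1920x1200
print(f"Above 1080p: ~{concurrent_users * pct_above_1080p / 100:,.0f} users")
print(f"At 1440p or higher: ~{concurrent_users * (pct_above_1080p - pct_1200p) / 100:,.0f} users")
```

This reproduces both figures in contention: roughly 190,000 users above 1080p in total, but only about 57,000 once 1920x1200 is excluded, with 1440p carrying about 1.78x the pixels of 1080p.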
http://www.alexa.com/siteinfo/anandtech.com#
Judging by your 6-month traffic stats, I'd say you'd better start writing REAL articles without slants before your traffic slides to nothing. How much more of a drop in traffic can you guys afford before you switch off the AMD love? Click the traffic stats tab. You have to be seeing this, right, Anand? Your traffic has nearly halved since ~9 months ago and the 660 Ti stuff. :) I hope this site fixes its direction before the Volcanic and Maxwell articles. I might have to start a blog just to pick the results of those two apart, along with a very detailed history of the previous articles and their comment sections. With it all in one spot to take in at once, I'm sure many would be able to do the math themselves and draw some startling conclusions about the last year on this site and how it has changed. I can't wait for Ryan's take on the 20nm chips :)
Laststop311 - Wednesday, June 5, 2013 - link
Who actually buys a computer and does nothing but game on it every second they are on it? That's why the A8-5600K should not be the recommended CPU. It's just going to drag you down in everything else you do with the computer. The i5-2500K should be here too. You can get them used for a STEAL on eBay; I've seen them go for around 140-150. Sure, you can pay 100-110 on eBay for the A8-5600K, but is a 40 dollar savings worth that much performance loss?
TheJian - Sunday, June 9, 2013 - link
I didn't even go into this aspect (it's not just about gaming, as you clearly say). But thanks for making the other half of my argument for me :)
Your statement plus mine makes this whole article and its conclusions ridiculous. Most people buy a PC and keep it for over 3 years, meaning you'll be punished for a LONG time, every day, in everything you do (gaming, ripping, RAR, photos, etc.). AMD CPUs currently suck for anyone but the very poor. Even for the poor, I'd say save for another month or two, as $50-100 changes the world for years of your computing no matter what you'll use it for. Or axe your video card for now and buy a higher-end Intel; survive for a bit until you can afford a card to go into your machine. AMD just isn't worth it for now on desktops. I'm an AMD fan, but the computing experience on Intel today is just better all around if you ever intend to put in a discrete card worth over, say, $100, and this only gets worse as GPUs improve and leave your CPU behind.
You will get more CPU-limited every year. Also, it's much easier to change GPUs than CPUs (which usually requires a new board for substantial gains unless you really buy at the low end). Having said that, buying low-end Haswell today gets you a Broadwell upgrade later, which should yield some decent gains since it's 14nm. Intel is just hard to argue against currently, and that is unfortunate for AMD, since the bulk of their losses is CPU-related and looks to just get worse (the GPU division actually made ~$15 million or so, while the CPU side lost $1.18B!). Richland changes nothing here; it just keeps the same audience it already had, for the same losses. They need a WINNER to get out of the red. Consoles may slow the bleeding some, but won't fix the losses. Steamroller had better be 30-40% faster (10-20% is not enough; it will again change nothing).
firefreak111 - Wednesday, June 5, 2013 - link
Quote: "What we see is 30.73% of gamers running at 1080p, but 4.16% of gamers are above 1080p. If that applies to all of the 4.6 million gamers currently on steam, we are talking about ~200,000 individuals with setups bigger than 1080p playing games on Steam right now, who may or may not have to run at a lower resolution to get frame rates."
Wrong. 2.91% is 1200p (1080p at a 16:10 ratio), which is barely a higher resolution. Only 1.25% are truly at 1440p or above, a much smaller number: ~57,000 gamers compared to 1,380,000 gamers... I respect 1440p, having bought a new system to play at that res, but it isn't going mainstream any time soon.
I wish I could take this article seriously. You choose 4 games to recommend a CPU (Metro 2033, which is GPU-bound; Dirt 3, a racing game focused on graphics; Civ V, which you knock off as unimportant based on FPS rather than turn times, which is all anyone really cares about in the late game; and Sleeping Dogs, which is open world but doesn't have complex scripting or AI) and then choose AMD based on 3/4 of those games being GPU-bound and thus not favoring the faster Intel CPUs much?
FPS will only get you so far; smoothness will be better on the faster CPUs. Most importantly, if you want a serious article with a good recommendation, how about testing CPU-bound modern games? Shogun 2: mass AI calculations for many units, combined with complex turn times (which is very important in any turn-based game). Skyrim: genuinely complex AI and large amounts of scripting, which use the CPU to the utmost. Crysis 3: a good test for a balance of CPU and GPU load. BF3 multiplayer, which from personal experience needs a good CPU to play well.
Use both Nvidia and AMD GPUs; one could favor the other, leading to a better recommendation (this brand for this CPU). Civ V will see large performance gains on an Nvidia card combined with a good CPU, due to its use of deferred contexts (DX11 multithreading) and Nvidia's support for it (AMD seriously needs to step up and support it; most game engines aren't using it because AMD isn't. It's built into DX11, so support it, AMD!).
Lastly, recommend for the mainstream. 1080p is the mainstream, not 1440p+: 1440p+ is 1.25% of Steam players, while 1080p is more than 30%.
CiccioB - Wednesday, June 5, 2013 - link
I wonder what the point is of conducting such a big effort to test CPU performance and then making all the systems GPU-bottlenecked, just to take into consideration 4% of the gaming population.
Moreover, some tests were done with an "old" GTX 580, which bottlenecks at those resolutions quite quickly.
I renew my request to update the list of games used and to use the most "popular" video settings, in order to make a real comparison of what a gamer may find with the usual setup they would use at home. Monitors bigger than 24" are not popular at all.
Maybe an SLI/Tri-SLI setup and a 5800x resolution could be added as well, but surely that should not be considered the way things normally work, nor taken as definitive benchmark results from which to draw some obviously confusing conclusions.
An A10-xxxx is way, way behind any i5 CPU, and often even behind some i3s, in real gaming. I can't really understand how one can believe such a suggestion.
I am starting to think that something other than objective results is being created and shown here.
TheJian - Sunday, June 9, 2013 - link
AMD only visited ONE website in recent history: ANANDTECH.
Also note they pushed this 1440p idea when the numbers were EVEN WORSE, in the 660 Ti article's comments section (and even in the article's conclusions; we're talking 9 months ago, and 1440p and above are STILL not popular). See Ryan's exchange with me in that article. He was pushing the Korean eBay dude then... ROFL. I pointed out then that Amazon only had 2 people selling them and they had no reviews (ONE, which was likely the guy that owned the place selling it), no support page, no phone, their website wasn't even their own domain, and the email was a gmail address if memory serves. Essentially giving your CC# to some dude in Korea and praying. Another site mentioned he did pray when ordering a test unit... LOL, TechReport's 1440p Korean review back then, if memory serves. Yet Ryan claimed everyone in the forums was doing this... Whatever... Don't even get me started on Jared's personal attack while ignoring my copious amounts of data proving Ryan's article BS, even using Ryan's own previous articles' benchmarks! It's kind of hard to argue against your own data, right?
I sincerely hope this site goes back to producing articles on CPUs/GPUs that are worth reading. These days all they do is hide AMD's inadequacies vs. Intel and NV. They are the only site saying things like "buy an A8-5600 for any SINGLE-GPU machine"... I can't believe how far they've gone in the last 9 months. Their traffic stats show I'm not alone; the comments here show I'm not alone. AMD can't be paying them enough to throw their whole reputation down the drain. Look what the SYSmark/BAPCo/Van Smith scandal did to Tom's Hardware (Tom even changed all his bylines to "Tom's staff" or some such). He had to sell at far less than the site was worth before the damage, and it took years to get back to a better reputation and wash off the stink. Heck, I stopped reading in disgust for years, and many IT friends did the same. I mean, they were running Intel ads in AMD review articles... LOL. I think that is just wrong (the Van Smith stuff was just unconscionable). For those who remember Van, he still writes occasionally at brightsideofnews.com (I only recently discovered this; he also writes on vanshardware, but not much analysis). Good to see that.
Pjotr - Wednesday, June 5, 2013 - link
What happened to the Q9400 in the GPU charts? It's missing. No, I didn't read the full article.
HappyHubris - Wednesday, June 5, 2013 - link
I know this was addressed in the article, but no 2013 gaming part recommendation should be published based on average FPS.
Any Ivy Bridge i3 mops the floor with a 5800K, and I'd imagine that Sandy Bridge-based i3s would do so even more cheaply. http://techreport.com/review/23662/amd-a10-5800k-a...
Kudos on an article that includes older processors, though... it's nice to see more than 1 or 2 generations in a review.
ArXiv76 - Wednesday, June 5, 2013 - link
Having read technical articles, white papers, and tech reviews for over 25 years, I can't remember ever reading a "finding perfection" examination. My question is: does there exist a CPU (across all CPUs tested) to GPU (across all OEMs tested) mix that is ideal? Obviously speed is king, so I am thinking more from an engineering perspective. Does this exist?
Steam and EA online are both great services. If a service takes away physical media, it's a huge winner to me. I still have piles of Sierra game boxes stored away.
bigdisk - Wednesday, June 5, 2013 - link
Oh, Anand / Ian Cutress,
You really shouldn't put your benchmark title and settings within an image. You absolutely want this as text on the page for SEO.
Cheers, good article.
majorleague - Wednesday, June 5, 2013 - link
Here is a YouTube link showing 3DMark11 and the Windows index rating for the 4770K 3.5GHz Haswell. Not overclocked.
This is apparently around 10-20fps slower than the 6800K in most games. And almost twice the price!!
YouTube link:
http://www.youtube.com/watch?v=k7Yo2A__1Xw
kilkennycat - Wednesday, June 5, 2013 - link
Quote:" The only way to go onto 3-way or 4-way SLI is via a PLX 8747 enabled motherboard, which greatly enhances the cost of a motherboard build. This should be kept in mind when dealing with the final results."The only way? X79 supports up to 4 8X channels of PCie 2/3.
The 4-core 3820 overclocks readily and on a X79 board is a very small cost enhancement
over a high-end non-PLX8747 1155-socket setup. Plus the upgrade benefit of stepping up to the 6-core 3930K if one wants to combine usage for professional multicore applications with gaming.
random2 - Wednesday, June 5, 2013 - link
"What we see is 30.73% of gamers running at 1080p, but 4.16% of gamers are above 1080p."So an article and benches are provided for the benefit of 4.16% of the gamers who might be running more pixels vs the 65% (almost 3 million) lions share of gamers that must be running at fewer pixels than found at 1080p. Very strange.
Dribble - Thursday, June 6, 2013 - link
Just to point out the blindingly obvious, but who would spend big $$$ on a 1440p monitor and a top-end GPU and then buy a low-end budget CPU (A8-5600)? The realistic minimum recommendation is going to be an i5-3570K.
xineis - Thursday, June 6, 2013 - link
So, how would a 955BE perform compared to the CPUs in the test? From what I understand, I should just keep this CPU, as a new one is not going to make much of a difference?
Zoatebix - Friday, June 7, 2013 - link
Thank you for doing all this work. A great follow-up to the original!
Could you please correct some charts on the CPU Benchmarks page, though? The "Video Conversion - x264 HD Benchmark" section is displaying the charts for the "Grid Solvers - Explicit Finite Difference" section.
Klimax - Saturday, June 8, 2013 - link
Frankly, not the best article. The resolution is too high for the GPU, and then a CPU is recommended based on it: a CPU which will not provide the performance needed for games. (TechReport showed that an APU is not a good idea when paired with a real GPU; FPS might be in range, but the latency is hellish.)
JNo - Sunday, June 9, 2013 - link
Ian, I'm afraid I have to agree with some of the naysayers here. You've tried so hard to have a clean *scientific* analysis that you've failed to see the wood for the trees. In actual fact I fear you've reached the opposite of a scientific conclusion *because* you only focussed on easily obtainable/reproducible results.
Just because results for modern games are hard to obtain doesn't mean you can ignore them, despite it being a hard path to walk. I have 1440p, but agree that it's not relevant to the vast majority, and anyone who can afford a 1440p monitor won't care about saving $40 on an AMD A8 vs a Core i5. So you have to be *realistic* (as well as scientific).
I know from a few years of international finance analysis that when doing an independent study, there is a chance you can come to a conclusion that flies in the face of the market or common opinion. You have to be *SO* careful when this happens and quadruple-check what you have ended up with, because 99% of the time the market or 'hive mind' is correct and there is an error or misunderstanding in your own work. After all, the collective conclusion of hundreds of often intelligent people is hardly likely to be wrong, even if you are a smart guy. The chance that you have found the truth and that everyone else is wrong really is about 1% (yes, it does happen, but it is a once-in-a-blue-moon type of event).
It might seem a huge hit to admit that much of your hard work was misdirected, but it could save more pain in the long run to go back to the drawing board and consider what you are trying to achieve and how best to go about it. A very small sample of older titles at unpopular resolutions really could skew the results into being misleading.
CiccioB - Wednesday, June 12, 2013 - link
I agree. However, we have still to understand what thesis Ian wanted to demonstrate.
If it was "AMD CPUs shouldn't appear so bad vs Intel", the strategy used for the demonstration is quite good.
On the other hand, if it was "Let's see which is the best CPU for playing games", the strategy is a complete failure. And it is still partially the same if it were "Let's see which is the cheapest CPU to cope with a bottlenecked GPU", as those old games, apart from Civ5, do not have any complex AI or scripting, which are the CPU-intensive tasks.
If I were to grade this work as homework, I would give it an F, because it is aimed at a small part of the market, uses old benchmarks not valid today, is incomplete (lack of FCAT), and has a wrong setup (bottlenecking GPUs to evaluate CPU performance?).
Wrong on all counts, unless, as said, the intent was to show that AMD CPUs are just trailing Intel's most expensive ones instead of being a complete generation behind. In that case the evaluation can be a B, but it becomes quite limited if we look at the market represented (is the 3% of the market that is capable of spending well more than the average gamer a good target for demonstrating that they can spare a few bucks by using an otherwise castrated CPU?).
For all these reasons I would say this is one of the worst articles I have ever read on this site. It shows some incompetence or, worse, a bias.
Filiprino - Thursday, June 20, 2013 - link
It's cool that you test old CPUs, so we can see the improvement in CPU processing power over the years.
UltraTech79 - Saturday, June 22, 2013 - link
This article is irrelevant to 95+% of people. What was the point of this? I don't give a rat's ass what things will look like in 3-5 years; I want to know performance numbers for a setup that is realistic TODAY.
Useless.
core4kansan - Monday, July 15, 2013 - link
While I appreciate the time and effort you put into this, I have to agree with those who call out 1440p's irrelevance for your readers. I think if we tested at sane resolutions, we'd find that a low-end CPU like a G2120, coupled with a mid-to-high-range GPU, would yield VERY playable framerates at 1080p. I'd love to see some of the older Core 2 Duos up against the likes of a G2120, i3-3220/5, on up to the i5-3570 and higher, with a high-end GPU at 1080p. That would be very useful info for your readers and could save many of them lots of money. In fact, wouldn't you rather put your hard-earned money into a better GPU if you knew that you could save $200 on the CPU? I'm hinting that I believe (without seeing actual numbers) that a G2120 plus a high-end GPU would perform virtually identically in gaming to a $300+ CPU with the same graphics card, at 1080p. Sure, you'd see greater variation between the CPUs at 1080p, but when we're testing CPUs, don't we WANT that?
lackynamber - Friday, August 9, 2013 - link
Some people don't really know what they are reading... apparently!!
The fact that in every single review someone says AnandTech is being paid by someone is actually a good thing. I mean, a month ago a bunch of people said they were trying to sell Intel CPUs, and now we have people saying the same shit about AMD.
Furthermore, the whole benchmark is based around 1440p! Calling it bullshit because only a small niche has such a monitor is stupid. Nobody has a Titan either; should they not benchmark it? No one runs quad SLI either, and so on.
Even the guy that flamed Ian admitted that the benchmark bottlenecks the GPU, so it makes AMD look better. WELL, THAT'S THE FUCKING POINT. AMD LOOKS better because it fucking is, taking into consideration that, as long as you have a single card, YOU DON'T FUCKING NEED ANY BETTER CPU. That's what the review pointed out.
All the benchmark and Ian's recommendation said was that, for 1440p and one video card, since the GPU is already the bottleneck, get the cheapest CPU you can, which in this case is AMD's A8. I mean, why in fucking hell would I want an i7 at 10 GHz if it is left idle scratching its balls because of the GPU?