I know, that's what makes gamers' continued love of i7s so puzzling. If you look at the die of a Core i7-7700K, for instance, the IGP takes up more space than the processing cores! So Intel could have given people twice the cores for the same cost in terms of silicon, which is the determining factor.
Now, I understand that the i7 gave better performance, but now that AMD is delivering more cores that are competitive, revealing how Intel has treated them like rubes, gamers still vilify AMD and back Intel.
Well, most gamers need really large ATX cases and blinking lights on their mouse and keyboard, and every component has "gaming" attached to its name. So generally gamers do not look at logic much if they belong to that class of users.
That's a mighty high horse you've got there purely for disagreeing with somebody's aesthetic choices. Do you sniff at people's non-black vehicles or painted walls, too?
Agree... just like a car; love the interior UV LED accents tucked away in the new Mercedes S550. Color schemes are just as valid a design consideration if you are going to have a design consideration at all. But anyone with a holier-than-thou attitude definitely should be forced back to the days of beige boxes. (They can get a grey Chairman Mao pajama uniform of conformity to match.) Square black with no lighting accents does not mean you are the ruler of taste! :)
Not really; there were some tests conducted, and gamers actually do just buy all that funny stuff with "gaming" in the name. That simpleton marketing trick actually works.
And so do 20-cent LEDs. It is really, really puzzling for me too. But what do I know; I have two $200 gaming keyboards, blinking LEDs in one of my PSUs, etc. … while complaining about gamers "falling" for cheap advertising tricks.
A lot of gamers are like me, and simply care about best FPS and shortest load times. I will buy whichever CPU/GPU/SSD delivers that, within a reasonable budget. Hating a company is in this case illogical.
I would consider myself a gamer (though not as much as I was 10 years ago); I still game a bit each evening. And I can't stand flashy things; I remember even going as far as snipping the LEDs off fans that came with a chassis back in the day.
Now I'm perfectly happy with my all-black aluminum Lian-Li A76 tower where I just throw in random components. It's my third system in the same chassis; nothing glows or blinks except for the motherboard lights, and I like it that way.
I don't think it's puzzling at all, especially given reviewers mostly do stupid benches. There are few games that actually benefit from 6 or 8 slow cores over 4 fast ones, and that mostly only applies to multiplayer titles like BF1, which are hardly ever benched. So the 7700K, especially overclocked, still clearly comes out on top.
And as has been said, for high-refresh-rate gaming a fast CPU is a must, as it has to pump out 120+ frames per second. Besides that, the launch issues with Ryzen, and the fact that it is underperforming in gaming compared to its other benches, will make people hesitant to buy and push them towards the known factor, i.e. the 7700K.
I agree. I'm an AMD fan that'll probably get an R5 1600X, but everything you say about the 7700K is true. If I had a 144 Hz monitor and played at 1080p, I'd probably get a 7700K. I don't get why some people knock Intel's i7s unnecessarily. They have a use, and it's not stupid to get them over a Ryzen product.
If the usage is just for games, it may be exactly stupid to get Intel over the new AMD product. As mentioned, issues should be expected at launch; the fact that the issues identified so far are nothing major is pretty damn impressive. Look at performance per dollar: it appears that the AMD chips aren't even being harnessed properly yet, and games and maybe even the Windows kernel/scheduler may need some optimizing, yet it's competitive. When you factor in the cost of the AMD chips, I'm pretty damn impressed. Given this, the long-term potential of the AMD chips simply beats the old Intel chips.
Now, if you're using the system for things other than gaming, the Intel chip may have other intangible "benefits" attributed to it, such as stability. Those are going to be very difficult to understand until much more data is available a few months/years down the road.
That's a good point: gamers whine that games don't make use of extra cores, but there are probably a couple of hundred or more threads running on a desktop at any time.
Agreed. The extra cores would be nice. Someone already mentioned it but multiplayer matches (which are near impossible to test with any scientific consistency) have different demands than offline benching. A lot of the multiplayer games I play have "stuff hitting the fan" moments where everything is happening at once. In situations like that, I'd probably like an extra couple of threads to shunt background processes.
Granted, Game Mode might help alleviate some of that, especially if you have a slower CPU and manually close anything that Game Mode doesn't have domain over (kill your background torrents, close all your browser tabs, etc). But it's not a silver bullet, and I think in a lot of real-world situations the extra cores are better than an offline "everything that you normally have running is disabled" lab test on a fresh OS install would indicate.
What's more, use Process Lasso to bind all those background tasks and programs to cores on CCX1 instead of CCX0, and you get a much better chance of your game threads remaining on CCX0 where they belong, because the Windows scheduler will see the cores on CCX1 running more tasks and will schedule the game to run on CCX0.
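If you'd rather script it than use Process Lasso, here's a rough sketch of the same idea with Python's psutil. The CCX-to-logical-CPU mapping (0-7 on CCX0, 8-15 on CCX1 for an 8C/16T part) and the process names are just assumptions for illustration; check your own system's topology first.

```python
# Hypothetical sketch: pin known background apps to the second CCX so the game
# keeps the first CCX to itself. Assumes logical CPUs 8-15 belong to CCX1 on an
# 8-core/16-thread Ryzen, which may not match your machine.
import psutil

CCX1 = list(range(8, 16))                     # assumed logical-CPU IDs for CCX1
BACKGROUND = {"Discord.exe", "Spotify.exe"}   # example process names, not exhaustive

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] in BACKGROUND:
        try:
            proc.cpu_affinity(CCX1)           # same effect as a Process Lasso affinity rule
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass                              # skip anything we aren't allowed to touch
```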
I get your point, but your specific examples are poor. I have Cortana disabled, Windows Update runs only when I click it, and no AV beyond Security Essentials.
Now, if you were to say extra things like keyboard/mouse software, teamspeak, etc etc...
This is a false statement. The human eye can detect well beyond 30 FPS. That's like the people who say you can't see past 720P. Well maybe a handful of people can't, but a ton of people can.
People can discern framerate differences above 30 FPS and even well above 60 FPS. That doesn't mean that 30 FPS won't feel fairly smooth (I personally find 30 is perfectly acceptable, but 40-60 is nicer and I'm annoyed if I'm playing a game at 15 FPS -- From the Depths trundles along at 10 FPS on Quadro NVS 160m and it's annoying, but I still play it like that until I can be bothered to get a faster laptop.), but there are reasonable arguments in favor of reaching above 30 including future-proofing and matching up with screen refresh rates.
I don't want to be that guy, but the human eye has no cap on frame rates; it really comes down to how well your brain is trained to interpret what your eyes capture. Tests have been done where jet fighter pilots could clearly see a picture of an enemy plane shown for a 200th of a second and name the type of the plane. That alone suggests that we can at least "see" 200 FPS and possibly a lot more; it clearly varies from person to person. I'd wager that people who play fast first-person shooter games notice the difference between 30, 60, 120, and 200 frames quite easily.
Take into account the difference in motherboard cost and they cost about the same. So if you had a budget of ~$600 for a new CPU and graphics card, which one should you buy? As a gamer?
It also depends mostly on the resolution you are using. At anything higher than 1080p the CPU doesn't matter, and AMD could easily be the better choice if you are looking for better productivity. At 1080p we won't know until benchmarks come out on the R5s. If they are anything like the R7 line, they will not be capable of FPS as high as Intel processors, but they may still be capable of FPS high enough that it will be fine, and still a big improvement in productivity vs. Intel. Also, in new games that use APIs like Vulkan, such as DOOM, the FPS is so high it doesn't matter which processor you use! So, AMD would be a good pick-up for the extra productivity!
I'll definitely stay with Intel if they address the competition with a 6-core Cannon Lake (which I believe is the case?) i7-8700K (or 9700K if 8th gen is Coffee Lake), if they are priced well under $300. Otherwise, 4 cores are way better than 2 for the same reason 6 WILL be better than 4 soon! DirectX 12 is going to leverage as much to offload more work to the GPU. And 1080p is only a thing until 4K fidelity is fast. I can barely stand to look at 1080p anymore! (Amazing how the brain is wired not to favor a perceived loss of quality!) But when someone hands me a phone with a 1080p screen I instantly notice and find it revoltingly primitive, now that my brain is wired for the crisp, sharp legibility of a phone GUI with a QHD AMOLED screen's fidelity and jet-black, inky, self-emitting-diode yumminess.
We are at an awful place of high prices on this display tech, which is why I consider all my monitors temporary solutions. Speed is great, but I am really looking forward to Rec. 2020 combined with Samsung's upcoming self-emitting QLED variant (or OLED). In which case... speed is not everything for gamers, particularly when the best talent producing graphics still has to compromise on their render targets, sadly making even the best graphics look butt ugly. If speed were everything, no one would be asking if it can run Crysis; we would all be playing CS with the graphics set to "butt ugly". Pew pew pew. It kind of sucks that all this color accuracy, fidelity and contrast ratio is all CURRENT TECH, yet I know of no gaming monitor that has a combination of all three yet?!!! I easily feel like I am being milked! Speed apparently isn't a thing in TV land (though consoles still claim there is no such thing as the PC master race). But it doesn't matter anyway, because the 32" TV market that once ruled isn't providing an alternative offering of 4K HDR10 WCG OLED 32" TVs... which would at least make do until the gamer-branded bait monitors that sell for as much as $2000! :( The last thing I am worried about is the future of 1080p bangity bangity bangity fps (we can't all be the top 5 percent e-sport top dogs); that's hardly representative of anything but a niche concern (though a valid one! 1080pew pew). But as someone looking forward to these CPUs IMPROVING the landscape with competition for US INTEL FANS, I'm sort of scratching my head at some of the negativity that's flying at such a niche concern! This is going to be the best thing that ever happened to Intel?!! Sounds like some are trying to sabotage efforts to light a flame under their sleepwalking a55es when we all know they have been cruising incrementally at an arrogant premium! Friends don't let friends cruise at arrogant speeds.
"When was the last time you heard someone bragging about a igpu?"
My friend actually does, lol. I kind of agree with my friend, though I'm not too convinced that (for example) an Iris Pro 580 from some high-end Intel chip is better than a GT 750M.
I've got a 960M and an Intel 530. All of the Arkham games aside from Knight can play with the 530 @ 720p at 60fps. Knight won't even load with the 530... lol
Just tinkering to see what they would do. With my default 960M @ 60fps settings the 530 only produces 6fps.
First, you don't have to go Intel for 1080p. When you can push over 150 fps with a GTX 1080, what's the point of 5% better IPC? If you stream on an i7, there goes your lead.
Second, 5 to 10 percent is hardly brag-worthy, especially when they had to overclock the i7 to 5 GHz to do it. I would rather have 8 more threads than 5 percent better IPC any day. Ryzen processors are beasts at low prices. With the 1600 at 219 USD, why in the hell would you ever consider an i7 quad core? Period. Done, end of fn story.
Give them a break. This is the first iteration of an entirely new CPU architecture. Perhaps it won't overclock so well, and perhaps the compilers won't produce the best code for it, but I think it's pretty clear there exist relatively easily achievable improvements to Ryzen, and we'll probably see them in the next generation.
For sure. AMD would have to be run by imbeciles if this entirely new product line did not have room to grow. This is business 101, practiced industry-wide, especially by storage drive and chip manufacturers.
Morawka, I agree most of the games not running on new APIs like Vulkan will have issues, but only at 1080p. But running DOOM, AMD vs. Intel doesn't matter; you will still have more FPS than you need for a 144 Hz monitor, and overclocking will give you no added value. With new games it won't be an issue, but with older games you could run into low frame rates if you are using a 1080p resolution. At anything higher than 1080p, AMD usually scores within 1-2 fps of Intel or beats Intel. For the most part AMD is going to give you better productivity and about the same gaming experience as Intel, unless you use a 1080p monitor and play games that don't use new APIs like Vulkan.
Narcissistic much? Who knew that it was as simple as AMD buyers being more intelligent than Intel buyers?
The fact is you don't really have anything to base your data on, because AMD hasn't had a competitive chip since probably before you were born.
The only reason AMD keeps chuckin' cores at chips is because their performance just doesn't hold up without them. If they could manufacture a competitive chip for less money, I guarantee you they would. That is, unless all these years they've been bleeding money on purpose.
1. Ryzen 7 was built to compete with Broadwell in the virtual workflow department. DOLLAR for DOLLAR you are getting 6900K performance for half the cost (if you got a 1700 and gave it a quick overclock to 1800X speeds, then nearly 3x the dollar-for-dollar performance). So your cost argument is moot.
2. The Ryzen 7 line is more than competent at gaming. Many reviews (you know... data, that thing you requested) show this.
3. Virtual workstations are a much bigger market than gaming when it comes to chips like this. Why would a company or freelance worker cough up the extra 500 for a Broadwell chip with near-identical or slower performance?
For gamers who want to live stream their games or do other stuff besides gaming, the Ryzen 7 makes more sense than a Core i7 7700K. I happen to fall into this category - I game on Steam but I also run distributed computing when I'm away, and I would gladly sacrifice a bit of gaming performance for twice the compute performance (the competitive aspect of grid computing has me looking for as many threads as I can get). Judging a Ryzen 7 solely on gaming performance is missing half the picture (or half the cores).
AMD wasn't really in the position to offer anything better; you can't just create and market a new chip in a short period of time. So, whilst they weren't "bleeding money on purpose", they still had to make the best of a bad situation whilst they worked out a replacement architecture. Releasing Steamroller- and/or Excavator-based 6- and 8-core CPUs would have qualified as bleeding money; there may have been more gains realised due to the extra cores and L3 cache, but not enough performance to truly compete with Intel - Excavator is more of a mobile architecture than anything else (the progression from Trinity to Excavator is actually very impressive considering the unsuitability of the base architecture towards mobile platforms).
Ryzen's per-core performance ranges from decently better than FX (most games, especially older ones) to nipping at Intel's heels (productivity and some newer games), and as such you can't use under-performance as a reason for AMD opting for more cores.
AMD's last truly competitive offering was the Athlon 64 X2 back in 2005 and 2006, pre-Core (Phenom II was respectable but lost more comparisons than it won with Core 2), so with that in mind, we're talking 11 years. As you don't know prisonerX's age... I'll just leave that there.
I am kinda puzzled by those clocks. With the number of cores dropping, I'd expect to see the TDP budget spent on higher clocks, which is what AMD badly needs to improve its performance in games.
Yet the quad core chips have much lower clocks than the octa core. This doesn't make a lot of sense.
Unless those overclock nicely it doesn't seem AMD will have anything to address the high-midrange quads from intel, leaving that market niche without viable offerings. And leaving the 5 series to compete somewhere between the i3 and the lower midrange i5s.
The claimed TDP figures ain't looking too good either. The 1400 is basically half the 1700 at almost the same clocks, yet the TDP is identical. I'd expect it to be at least around 45 watts.
Are those indeed native quad core dies, or just defective 8 core dies with disabled modules? Maybe first tier is salvaged 8 core dies, with native quad core parts picking up later? The hexa cores are most definitely salvaged dies.
I think that is notable, and most likely a result of yield management. They're suddenly competing with Intel in performance and process, i.e. they're much more competitive, and they're clearly going to need to use every trick in the book against Intel's massive R&D budget.
Unfortunately they don't have access to competitive process nodes (well, unless they contracted Intel to fab the chips). GloFo's '14 nm' FinFET is basically 20 nm with FinFETs, or put another way, Intel's process would be called 10 nm using the same convention. That's where the performance gap is coming from.
GF are skipping 10 nm to get to a true 7 nm (i.e. what Intel would also call 7 nm) sooner, hopefully at the same time or even before Intel does.
The process being used by AMD tops out around 4.2/4.3 GHz for ambient use; the V/F curve just goes mental after 4.1 GHz. It's never linear once you go past the sweet spot, even if you have extra power budget to play with.
Don't forget not all chips have the same V/F curve anyway. Top end chips are the best of a bunch - they're binned to segments. Take a look at the range of some of Intel's designs.
Well, they should at least target closer to 4 GHz.
Looking at Intel, I see the max frequency rising as the core count drops: 4.2 GHz for dual and quad core, 3.6 for hexa, 3.2 for octo and 3.0 for deca.
I think what you are identifying is proof of very aggressive binning. Their design is slow and wide, and only the best silicon can handle hitting 4+ GHz.
I couldn't care less about higher clocks. This tells me that the process they're using is much more efficient in lower/average-high clocks, which means that mobile parts should be pretty damn power efficient, yet very performant compared to Intel. Can't wait for Raven Ridge, hopefully there won't be any delays.
It's the exact opposite of what you are saying, actually. The reason Intel overclocks better is that they have a much more efficient process technology than GlobalFoundries. Don't expect AMD to be competitive in the low-power market. They'll be far behind Intel in terms of performance per watt in the low-power market, at least until GlobalFoundries develops a more competitive process technology, which won't happen anytime soon.
My guess is that since most apps cannot use 4+ cores, if AMD had the R5s at the same or higher clocks than the R7s, they would be cannibalizing the sales of their R7 line.
Not really, because for the things where Zen really makes sense, such as rendering, the octa-core will dominate higher-clocked quad cores, just like it dominates the 7700K.
Higher-clocked quad-core parts would not cannibalize AMD's own market; they would win them some of the gaming market. Well, to be honest, if you game at 1440p or more, the 8-core Ryzen is perfectly acceptable; it only does poorly at lower resolutions. So not really the "gaming market" but the "low-resolution gaming market".
All Ryzens except the 7s are salvage dies; Ian addressed this in the article:
"So when dealing with a four-core or six-core CPU, and the base core design has eight-cores, how does AMD cut them up? It is possible for AMD to offer a 4+0, 3+1 or 2+2 design for its quad-core parts, or 4+2 and 3+3 variants for its hexacore parts, similar to the way that Intel cuts up its integrated graphics for GT1 variants. The downside with this way is that performance might differ between the variants, making it difficult to manage. The upside is that more CPUs with defects can be used.
We have confirmation from AMD that there are no silly games going to be played with Ryzen 5. The six-core parts will be a strict 3+3 combination, while the four-core parts will use 2+2. This will be true across all CPUs, ensuring a consistent performance throughout."
Except that in reality, GlobalFoundries is unlikely to provide as many salvage dies as the market requires. So a big question is: will the CPUs/motherboards allow unlocking, and how likely are you to have an unlockable die?
3+3 means that it is quite likely that one of the disabled cores is good, but you might not be able to run the thing in 4+3 mode. Also 2+2 (R5 1400) has less performance than 4+0 with the same cache size (thanks to needing the switching fabric to access the cache). Presumably at some point AMD will ship a R5 1400 that is only half a chip, but who knows how many chips they need to sell to make it worthwhile. Those masks are *expensive* and even Intel only makes a few.
Yeah, doesn't make sense. It also doesn't make sense that the 4-core models don't use only one CCX. That would make them more efficient, as they wouldn't have to communicate through the Infinity Fabric. I think AMD would be able to clock higher if they made an SoC with only 4 cores, rather than an 8-core SoC with 4 cores disabled. They would also be able to sell it a lot cheaper because of the smaller die.
The 1500X clocks 200MHz lower after XFR; it's a bit disappointing but it's not too far behind. As for TDP, that's not power draw but heat dissipation.
The 1500X also has access to the full 16MB of L3 cache over its 2+2 core setup. 4MB of L3 per core sounds very tasty so we will get our first look at how having more L3 cache helps performance.
The 1400X is MIA for now - it clocks higher than the 1500X but has 8MB L3. That might be the one for gamers. Releasing it now would probably cannibalise sales of the 1500X.
Sadly it's a lot more than 5% ST performance. 5% might be the IPC advantage, but then Intel easily has 20% higher clocks if you get a 7600K or 7700K. AMD's advantage here is the unlocked parts, basically invalidating the entire Intel non-K lineup, as then Intel loses this clock advantage.
Who is buying a Broadwell chip for a gaming build? These chips are clearly targeted at the lucrative virtual workflow market. (It doesn't hurt that they are quite competent at gaming as well.)
MOST high-end gaming rigs out there, to tell you the truth.
Here, try searching 3Dmark. I've done this before just to prove a point - I searched the first FIVE HUNDRED benchmarks (choosing only GPU 1080) for TimeSpy, and I saw mostly Broadwell-EP/Haswell-EP all the way. No quad cores of any kind, no dual-channel memory systems either.
I'd go so far as to say most gaming systems have 8 cores as well, and these are the systems that buy plenty of games at full price: PS4 and Xbox1 consoles.
The gaming world has moved on to the 8 core world years ago, don't expect things to slide back.
The fact that you think games will never make use of the extra cores just illustrates the level of stupidity (and ignorance) that AMD has to counter in order to get a fair shake in the marketplace.
I am one of those gamers who play a modern niche MMOFPS game built on DX9 (or they simply wouldn't get 200-300 gamers running around, shooting machine weapons, plus all their vehicles driving and firing in the same hex) that absolutely needs the best single-threaded perf I can get AND won't use extra cores unless the engine is majorly revamped.
By the sound of you hammering on at that poor guy, it makes me feel like you'd call me EVIL for still having to choose a 7700K after waiting to see the real-world Ryzen benches.
So what would you suggest? Buy a Ryzen anyway in case some year down the road my games' devs use more cores? Will you try to say it's OK to lose at minimum 5-8% SCP (and that's before the added perf I'll get with an overclock bump, something else that Ryzen silicon doesn't seem able to do well just yet) just because of the warm fluffy feeling I'll get buying AMD? I need to upgrade now; what should I buy and why?
I don't know you or your gaming habits MadAd, but my knee-jerk reaction is to think about how much longer you're going to play those particular games and whether or not the single threaded performance difference between Kaby Lake and Ryzen will actually have a substantial impact on your gaming experience in both the short and the long term. The bottom line is that people usually move onward to other games eventually.
If you plan to build a box you'll use for years to come without major upgrades and you think you'll be switching to other games sooner or later during that system's life, then having more cores might be better in the long run since it really looks like we're standing at the cusp of a big push for more-than-4-core gaming.
On the other hand, if you're like me and you sit on the same games for a very, very long time and you're feeling the single threaded performance pinch right now, it's hard to deny Intel's lead. If you're that sort of person, you may just not get a whole lot out of those extra threads over the life of your next build.
I really think you're the only person that can make a decision for you. And, on the bright side, no matter what processor you end up getting, 7700K or some Ryzen 5 version, you're still getting a very modern CPU. Just find a 4K monitor and the GPU bottleneck you end up with from driving that resolution should hide any CPU shortcomings in either MT or ST performance. :3
> sit on the same games for a very, very long time and you're feeling the single threaded performance pinch right now
^^ This. I've played most every main FPS franchise since 2000, and for 4 years now there has been no other FPS worth playing for me than Planetside 2.
Other combined-arms FPSes stopped at 64 players, making them small arena shooters in comparison. Planetside 2 is all-out warfare, but keeping up with 200-300 or more people, with any vehicles they bought, in one location of a much larger continent takes major processing power, and it's very CPU limited, with the devs putting more and more load on as the months go by. Yes, I am feeling the pinch in, say, a frame-crushing tower battle, and I hate having to reduce graphics; potatoside isn't my thing :)
When I used to play MMOs, some people were obsessed with making PvP videos. Having fast storage plus the additional CPU cores is only going to help with that.
In fact, I remember seeing tests back then focussing on suitability of various CPUs at running multiple applications at the same time, pointing out the strength of triple-core CPUs (i.e. Phenom II X3) over the more traditional dual-core CPUs (Core 2 Duo/Phenom II X2). Don't really see those nowadays.
Children like yourself think it's "my team versus your team." Why would I care about AMD?
What I do care about is that the monopolist corporate criminal Intel is screwing consumers and screwing the industry. But unfortunately the world seems to be full of dullards whose only response to being reamed by Intel is "Please Intel, may I have another?"
@prisonerx - browsing the thread it seems that as soon as you get involved it all degenerates into a flame war. Can't you post without taking things to extremes and calling people names? That wrecks it for the rest of us.
Wow, so many preciously sensitive flowers here! I'll be sure not to say anything controversial from now on, lest your feelings get hurt, Meteor2. I know you find it hard to cope with people who disagree with you.
Yes, because they are revisions upon revisions; what's Kaby Lake, stepping 9? Ryzen can only get better over time. The difference in gaming at anything other than 1080p is minimal, if any, and nobody buys a $500 CPU to game at 1080p; likewise, an i7 for 1080p is wasted cash.
I'd like to see a clear-cut list of consumer workloads (and games, when *not* bottle-necked by the GPU) which noticeably scale well with more than four threads.
Unless it's a real good list, Intel wins. Higher IPC, higher clocks, same power consumption.
Really, any professional use case. Even if you operate entirely in a web-based CMS (CRM, webmail, Google Apps, Office 365 etc) in Chrome, with each tab grabbing a thread, that helps keep things smooth.
Move to fat clients (Outlook, Excel, a browser, a music player in the background) and then have a larger workload to shift to (say, doing some light photo editing), and having more threads available = less pissing about watching Outlook chug its guts up as you search for that invoice from six months ago.
As for me? Thunderbird, Chrome, Firefox, Opera, half a dozen terminal sessions, and a couple of VMs on the go at any one time that I would like to be able to throw more resources at. Threading is very, very useful for me, and for any serious professional workload (image and light video editing, or CAD work where the author can't justify throwing £2k at a workstation or just plain can't get the budget for it at work, etc.).
And is very, very useful for everyone who isn't a gamer. Which is, in short, almost everyone.
Not really any professional use. For normal use (Office, Chrome, Excel) an i3 works fine, even with stuff like a music player in the background. If you are an AMD fan and just argued that a Ryzen is "good enough" for gaming, then the same applies to an i3 and any standard office work.
You only need all those cores if you are doing something compute-intensive (compiling, running lots of VMs, some image/video editing - although a lot of that uses the GPU, not the CPU).
If we're talking "normal use", then it comes down to the fact AMD is cheaper. For normal use we can also throw out overclocking, so it's just stock clocks. In that case you can get a true AMD four core for the same price as an i3.
FINALLY someone who actually gets it! This comment section seems to be full of rubes who think 1080p performance on their 144 Hz overpriced gaming monitors is the only metric by which you judge a chip.
Well, this is it. For web and basic office apps an i3 is fine. What scales with more than four threads? Processing big batches of photos in Photoshop? Finishing a video in Premiere? (Having never done that, I don't know what that involves.) I'm not convinced x264 and x265 scale too well; I've not seen benchmarks of sensible use-cases like transcoding four or five DASH streams or converting an h264 library to h265.
VMs I get, though, and running compilers if you're a developer. I'd say those are professional use-cases, though, where your equipment may well be provided by an employer. I'm more interested in consumer workloads (such as amateur vloggers).
Ouch! Most of the PCs at my work are i7s that are maybe around four years old. It's quite overkill, but it's nice to not be bogged down at all, as with my job we use many programs/windows consistently all day. Lots of Outlook, browser tabs (intranet portal), MS Access programs for SQL database browsing and interaction, the CRP system, Excel, etc.
I'm guessing you flip burgers for a living. In reality, competent programmers query the number of cores, create a proportional number of worker threads (for heavy-duty processing-only threads, it will usually be 1:1) and feed them tasks. Hard-coding a thread count into critical infrastructure is just incompetent. 4 isn't some magic number, and neither is 2 or 8.
Or let's say that you can't chop up your work into bite-size pieces and there is a strong sequential dependency in your code. In that case a competent programmer would pipeline the work, and the question becomes how to allocate work to each pipeline stage. Of course there is no way of dividing the work exactly so that each stage keeps a single core busy, so you just divide the work at natural boundaries where it's most efficient and make sure you have (many) more stages than you have cores, such that roughly 2 or more whole stages run on a core. Bigger stages don't scale up or down well in terms of cores. You can't make assumptions about how many cores you have and have your code only be efficient at a certain number. That doesn't work.
The idea that software, including games, is designed for some specific number of cores is a myth.
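A minimal sketch of that pattern in Python: query the core count at runtime and size the worker pool from it, instead of hard-coding 2, 4 or 8 threads. The crunch() task and the inputs are placeholders, not anybody's real workload.

```python
# Minimal sketch of "query the core count, spawn a proportional worker pool,
# feed it tasks" rather than hard-coding a thread/core count.
import os
from concurrent.futures import ProcessPoolExecutor

def crunch(chunk):                             # placeholder for a CPU-heavy task
    return sum(x * x for x in chunk)

def run(chunks):
    workers = os.cpu_count() or 1              # 1:1 with whatever cores are available
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(crunch, chunks))

if __name__ == "__main__":
    data = [range(i * 10_000, (i + 1) * 10_000) for i in range(32)]
    print(run(data)[:3])
```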
Love the response. I boxed pro for a couple of years, and people would 'call' me on it from time to time, usually keyboard warriors, but once in a while in real life also, as I'm not 10ft tall I guess.
And I'd have to ask: well, just because nobody *you* know is a fighter doesn't mean I can't be, or that nobody else is?!? But I like the way you dealt with it, by pushing the question back to them.
I'm glad you like my work. There'll be plenty more where that came from, so you're in for a treat!
As for the rest of your comment, what the **** are you on about? Are you trying to tell me you're some kind of violent moron, and that I'm a coward? OK! LOL!
Does it really bother you that I have a cogent argument and that people who I call out have no comeback? Is that why you're spouting that nonsensical drivel?
I've met more violent people than you'll ever know. Not just run-of-the-mill violent morons, but also the psychotic and violent. And even though I'm a pretty large and strong guy, I've never been in a fight with anyone. Why? Because I think violence is really, really stupid, solves nothing and just creates more violence. Also because my size gives a lot of people pause, even the mentally ill. I guess that makes me a coward, right? Or does it just mean that I actually have a brain? Something for you to think about.
So yeah, I'm not sure what you're trying to say... that if I was making my points in person you'd punch me in the mouth and then I'd start crying and tell you that you're right? Is that your fantasy? LOL!
"Unless it's a real good list, Intel wins. Higher IPC, higher clocks, same power consumption."
Ok, then. BYE! Not sure what you and others like you are even doing here. All I see is you pretending you are looking for something better. We get it, Intel is better at gaming. I don't dispute that. How about just enjoy your Intel product and piss off. For the other 50%, why are you even engaging people that obviously never had a plan of purchasing AMD?
I never suggested that you dispute that Intel is better at gaming. I'm genuinely curious as to what workloads people are doing on their home PCs which would benefit from more cores.
All I see is someone saying something about AMD and you generally responding, "no, Intel is better." OK, fine. You are absolutely sure your Intel choices work for you and there is nothing AMD currently offers that fits your use case; there is no reason to even be here. I just went to Intel's last CPU article here, http://www.anandtech.com/show/11083/the-intel-core... and any mention of AMD is marginal. There is no AMD fan constantly berating Intel's achievements, but the reverse is not true here. AMD is finally competitive (not the same as saying they are better), and you would think an Intel hornets' nest has been kicked.
They spent so much money on their Intel CPUs, so certainly nothing out there can equal their performance. And if, god forbid, there is some program or use case that benefits more from AMD, then nobody is using it and... it is not gaming!!! Because YES, 99% of computer users are gamers, and if not, then game developers. Rabid App... I mean Intel fans :)
Playing games while recording TV while compressing the last thing I recorded while serving media to my kid's tablet.
That's why I bought an FX 6300. Each core wasn't faster than Intel's, but its load capacity was higher. 6 is greater than 4.
If you measure by a single metric, then my laptop with an i5 6300HQ is better than my FX 6300. If you run a dozen apps at the same time, the FX performs better.
Consumer workloads are pretty much Office+everything that runs on a Chromebook. Intel Pentiums 4560/4600 are pretty much the way to go (or you could use AMD chips, they aren't likely to notice the difference). Make sure there is an SSD (and it doesn't have to be very big) and you are done.
I don't think ARM and Windows/Office are quite ready to work together, but for the users who aren't heavy Office users, Chromebook works wonders.
But for anandtech readers, I'd recommend any of the Ryzen parts without question (unless they really can only afford the Pentium chips. They are really good for the price).
First of all, let's get the facts straight. Before Ryzen, AMD CPUs gave you something like 50% of the single-core performance of an Intel CPU. As a result, a quad-core AMD CPU could barely keep up with a dual-core i3, and ONLY in heavily multithreaded tasks. In single-core performance, AMD would always fall flat on its face and fail to be competitive even with a Core i3.
Second, why should gamers even care much about CPU performance? These days, it's all about the GPU. You've got to spend 400-500 USD on a GPU before you start realizing that the CPU is bottlenecking you. Otherwise, the performance of gaming rigs with either an AMD or an Intel CPU is pretty much similar.
Well with excellent $400, $500 and $600 GPUs available from Nvidia and hopefully soon AMD, I'm very interested in where the CPU becomes the limit. I suspect it's i5 level. So for $240, what's going to be better, a faster i5 or a slower R5 with more threads?
And given the way GPUs are developing, these questions may well become relevant with just $200 mainstream GPUs in a few years, well within a CPU's lifespan.
Nope, in 2 years new games will drag those new GPUs to the ground anyway. Games will be GPU limited for the foreseeable future, and only the resolution will climb, through 1440p up to 2160p, as prices fall. Only games that require very high fps are bottlenecked by a Ryzen CPU, and even CS:GO does well enough when bound to cores on CCX0 or CCX1. MOST gamers play their games with graphics settings tuned for 60-100 fps, and in that space Ryzen does well enough.
Rome II Total War, Attilla Total War, World of Warcraft, Starcraft II, etc.
This is why gam3rz are interested only in IPC, and this is why I, and millions of other gamers, will never buy lame-ass AMD CPUs as long as their single-threaded performance is worse than that of an i5-2500K from 2011.
Sadly, though, most review sites [and often Intel fanboys] ask the wrong questions and then generate dumb answers. The criticism of Ryzen's gaming potential is 1080p performance. The argument is that 95% of gamers use 1080p or lower resolutions. OK, that's fair enough. Also fair is a reasonable speculation that 95% of those 1080p monitors run at 60 Hz. 60 Hz means at most 60 fps displayed: a 60 Hz monitor can NEVER display more than 60 fps. If a graphics card sends a higher frame rate, the monitor drops the frames and never displays them.
So the tester logic of pairing the most powerful graphics card with a CPU and running at 1080p to show a CPU bottleneck is bizarre and stupid. It lacks relevance. Unless the game is playing at less than 60 fps, the user is experiencing the exact same thing: 60 fps.
If folks with a 60 Hz 1080p monitor buy an Intel, they get the same gaming experience and worse [often much worse] encoding, multitasking and streaming.
Well stated, but sadly most gamers aren't engineering graduates and can't grasp the V-sync and frame rate pairing. They couldn't tell you the difference between volts and volts root mean square [Vrms] and how they play into hardware circuitry.
If they were engineers they'd spend very little time gaming and most of their life creating solutions.
I feel bad for computer engineers. They create these works of pure genius: efficient, fast, etc. Then it gets handed off to some lazy-assed coder who does the bare-minimum shit job he/she can that gets it to work.
I have a buddy who works in federally funded research that complains about how they just throw more hardware at problems, instead of recoding the software to make the program more efficient.
It's kind of sad that there is that much of a disparity between hardware and software engineers.
Performance of the i5-2500K is higher only when overclocked to the max. But sure, if you are one in a million (/s) that plays those ancient games, stay on Intel. And please keep away from threads such as this.
1. Tests where Intel wins happen when they turn off Cortana so Windows doesn't stress the extra cores. This is a biased test designed to help Intel, whose chip is worse.
2. In the real world no one can see more than 30 fps; games where Intel "wins" are over 100 fps, so that's another way to bias the test toward Intel, which is not up to par.
3. The lowest FPS favors AMD - that is what is annoying in games; when a computer locks up, it won't lock with the extra cores and better threading of AMD, which is superior.
4. Who buys a $320 CPU and then games at 1080p with a crappy graphics card?
It is patently obvious that these sites are paid off by Intel, where they try to tune the tests so that Intel will look better when in fact it is not. Anand, how about this: do a real computer build and test games with everything on? Let's see who's better?
Maybe if you come up with a test that is not biased you'll get more traffic vs. the competition and other sister websites? Just a thought...
1. Nobody tests real-life situations. If they did, you would have Skype, Discord, maybe Spotify and Twitch, and probably many others in the background, maybe also the Steam overlay with some webpages open.
2. Sorry, in real life even idiots can see 60 fps; just open YouTube and compare a gameplay video at 1080p30 and 1080p60. Good players can easily see differences between 60 and 100 fps, and e-sport fanatics can see (in blind tests) differences up to 150 fps. Fighter pilots can see, and recognize the content of, images displayed for 1/200 of a second.
3. More cores, fewer stalls?
4. The problem is that people do have great GPUs, and the better the GPU, the higher-quality CPU is needed.
Don't whine about paid sites; it's unbecoming. Create better tests yourself and interest people in them if you are an expert, instead of pointing fingers.
Check GloFo's capacity. Of course, since self-proclaimed gamers are a tiny sliver of Intel's market, GloFo might be able to supply them all. AMD can't "beat" Intel in sales (and couldn't when they held the upper hand across the board with the Athlon); they just don't have the manufacturing capacity.
I'm pretty sure Steam makes this data publicly available. You might have to watch the deltas (which might include retired bulldozer boxes replaced with Ryzen) but it should be close enough.
Gamers need that 5% better single-threaded performance because 2017 games haven't progressed since 1980! Apparently developers haven't heard of multithreading, multiple cores, 4-channel memory, HSA, DX12, etc.
Hello from Phuket. Yes, it's high here; my AW18, which usually runs 4x 4.4 GHz in the UK/USA, in Thailand runs x44 / x43 / x42 / x41, or even x43 / x42 / x40 / x40, or I throttle from time to time.
@ Nagorak
Last year, we had a record drought, longest in 50-something years, and I believe Bangkok hit 42 or 43C. Water trucks were everywhere, visiting resorts and topping up wells / tanks / pools. It was the first and only time I had to call the water truck myself.
If I go out, and return to the condo after a long day, the temp says 44C indoors. The villa is not too bad, due to its high ceilings, but 37C is still typical on return.
The 7600K looks way outclassed by the 1600X, which will also have the full 16MB(?) of L3 cache. Accordingly, it seems like it must be a 1700/1800 with defective cores.
Also good to finally have some excitement at the $100-$200 price point.
Correct me if I am wrong, but I remember Intel no longer does much die disabling, so a dual core is not a quad core with 2 cores disabled. Would AMD be at a disadvantage when it comes to BOM cost?
There was a recent discussion around the LLVM 4.0 release, with some developers suggesting they could get another 5-10% performance out of Ryzen.
Probably yes, but by having one silicon design you (a) save money on multiple sets of masks for each design, and (b) make deployment of various SKUs easier as there's only one design to worry about (you just pick the bin you need). AMD probably pays more per wafer than Intel does anyway, given Intel is vertically integrated. Depends on the cost difference of manufacture.
If you get one of these processors, buy 3200MHz DDR4 RAM with Samsung B-die chips.
Anyone else notice the 1500X and 1400 L3 cache difference. We know each CCX has 8MB L3 cache, so is 1500X a 2+2 part (2 CCX) and 1400 a 4+0 part (1 CCX)?
The choice of 3+3 instead of 4+2 for the six-core part makes sense; the use of 2+2 instead of 4+0 on the four-core part is pretty interesting, though. I wonder just how big the performance penalty is crossing the Core Complex. One thing that using the 2+2 setup does provide is access to both CCXs' cache. Whether they provide the full 8MB per CCX on the lower-core parts is still unknown, but it'd be pretty unusual for a single core to have 4MB of L3 cache.
I remember Intel saying that there was a point of diminishing returns (or at least no further increase in performance) on the size of the L4 cache on the Broadwell-C parts. For Skylake they reduced it to 64MB, I think. I know L3 and L4 are treated differently, but I wonder if having 8MB of L3 for 2 cores makes sense, or if cutting it down a bit actually helps.
Either way, these look like a pretty decent offering from AMD. I really wish they had more IPC or clockspeed overhead to overclock them into the neighborhood Intel has occupied for years. But if you're using lots of threads and don't care about clockspeed or IPC, these are great processors. Very interested to see the eventual market share the 1600X takes compared to the i5 series now. Also interested to see what Intel does core-wise after Coffee Lake... we have their 6-core roadmap, but in two years they're probably going to need to offer a mainstream 8-core/16-thread processor, even just to keep marketing parity.
How come Ryzen has Broadwell IPC when AMD claimed they surpassed their goal with 52% over Bulldozer (originally 40% over Bulldozer, which would put them at Broadwell level)? Other websites put Ryzen at Skylake/Kaby Lake levels.
The 1600X and 1400 seem kind of pointless for now. Save $30 and OC the 1600 to 1600X levels. On the other hand, the 1400 is too crippled (also less L3 cache, unless that's a typo), and hence the $20 more for a 1500X is totally worth it.
AMD, look at the market!!!! In every bin (3/5/7), counter the competitor's best-selling part with an equal core count, high clocks and a lower price, and there is no Intel choice left.... Don't push higher core counts and features; they've done that before and it did not work out that way. Missed opportunity to ditch the full competing lineup.
I don't understand the price/perf chart. The 1600 has slightly lower clocks but 50% more cores/threads, yet it sits at almost the same level of performance as the 1500X. That's a clear indication that the chart is strongly favoring ST performance.
The project list is long, and available time and hands are short, especially when I cover a wide range of topics solo. The spirit is willing, but unfortunately I don't have access to Narnia: xkcd.com/821/
But that's my point exactly: isn't the 1700X/1800X's performance limited by the fact that they get too hot too fast, whereas the R5 lineup could potentially be in a better position? I know voltage is a factor too, but if there's less heat, technically you could increase the voltage (just that little bit more), resulting in more OC'ing headroom.
Ryzen is limited by its 14nm low-power process. The voltage needed for 4.1 GHz is close to the critical 1.475V, so the max OC will be the same for the 1600X and 1400X as it is now for the 1800X.
I suspect that no Ryzen chip is going to be a great overclocker. Refinements in the gen 2 chips will hopefully help, but till then 4 GHz is probably a practical limit no matter how many cores.
Am I the only one finding it a bit odd that the 4-core Ryzen is 65 watts? You have mobile Intel chips at similar frequencies that are under 45 watts and have a GPU in them. In short, all Intel has to do is drop the GPU and replace it with more cores to beat AMD in the core count, while it already has the advantage in the IPC and frequency departments. How long do you think before Intel comes up with one, at least one with a very low-performance, video-out-only, non-gaming GPU?
Don't compare mobile chips with desktop ones. I don't think Intel has any desktop quad-core that is at 65W, only dual cores. So the quad-core or even six-core at 65W looks very good.
AMD hasn't even published their low/ultra-low TDP Ryzens yet. If their high-end models top out at 95W, I'm pretty sure the upcoming low-power models + maybe APUs will be pretty competitive too.
ARM servers aren't likely a real danger, but Intel keeps ECC and similar features on the i3 for that reason. Once you add the cost of the ECC memory, enterprise drives, and similar features, the loss of performance [with ARM] costs more than any Intel Xeon chip. Zen (and especially Naples) is likely to be a different story.
IBM is also a current threat. But since anyone with an ARM license and dollar signs in his eyes can make an "ARM server", Intel has to keep the i3 ready as an example of a much more effective "low end server".
Nice to see a consumer-level CPU with ECC memory support. Too bad Intel misses out on this market desire, especially since Google has shown the large number of memory-related errors in desktop-level PCs.
Ian - I have read that the Infinity Fabric speed is based off the RAM speed (2:1, I think?) and that faster RAM drastically reduces the latency when crossing between the CCXs. Others have attempted to test this, but I would be far more trusting of your work. Would you please take a look at this as part of your review?
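My working assumption, which is exactly what I'd like confirmed, is that the fabric clock tracks the memory clock, i.e. half the DDR4 transfer rate; a trivial sketch of what that would mean:

```python
# Assumption to be confirmed: fabric clock == MEMCLK == DDR4 transfer rate / 2.
for ddr_rate in (2133, 2400, 2666, 2933, 3200):
    memclk = ddr_rate // 2
    print(f"DDR4-{ddr_rate}: MEMCLK {memclk} MHz -> assumed fabric clock {memclk} MHz")
```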
1) The hero Jim Keller doesn't know the performance impact on using 2 or 1 CCX? 2) None of AMD senior engineer knows the performance impact on using 2 or 1 CCX? 3) None of AMD senior engineering managers knows the performance impact on using 2 or 1 CCX? 4) None of AMD engineering executives knows the performance impact on using 2 or 1 CCX?
Conclusion: AMD engineering executives are just for delivering presentations and holding chips for pictures!
Potentially even worse:
Why 3+3, 2+2? Can't yield on a complete CCX for 4+2, 4+0?
As a chip designer, I have been anticipating 2 CCXs for 8- and 6-core chips but 1 CCX for 4- and 2-core chips, but they should know their design and business/cost better!
Sure, I don't, but the way they drag people around makes me wonder:
Sent reviewers a BIOS that had no effect : )
Asked reviewers whether Windows was newly installed : )
Now,
"AMD now doesn’t necessarily recommend turning off simultaneous multithreading to improve gaming performance, a tweak that the company suggested to reviewers during Ryzen 7’s testing process." : )
For an 8-core die, no more than one or two cores should be bad, or you have a shitty process; be reminded the yield issue is likely in the SRAM.
"If you had a chip you could sell as a 4 + 0 , or throw it away , then not selling it would be the stupid part ."
If you had a chip you could sell as a 4 + 0, then having a shitty process would be the stupid part, get it?
Take care of a problem from its source, not wasting resources to take care of (sell) the shits it produces, get it? Have they learn the Bulldozer lesson enough: We give users more physical cores with lower performance, higher power consumption, higher cost : )
And what if the fault in a die is not in the actual cores, but in the interconnect between the CCXs?
You would not make an ideal employee anywhere. You don't seem to be able to separate your emotive fanboyism from a logical analysis of the possible scenarios.
This year will be very refreshing in the CPU/GPU markets. First Ryzen 7, now Ryzen 5 after only one month, and approximately one month after that the R3 and Vega announcements. Finally we have some competition and a change from the normal tick-tock releases!
Also, I'm surprised how narrow-minded people tend to be. First everybody is whining about how expensive Intel's and Nvidia's offerings are, and then, when you finally have some competition which could change the situation, people whine about how the solution lags by a whopping 5% in some test and piss on AMD because of it. You can look at Intel's and Nvidia's past profits compared to AMD's and draw your own conclusion about whether those two giants are really taking all they can out of the monopoly they have. We also have only a small idea of how much Intel-optimized (or designed-to-fit-Intel's-architecture) software is on the market, and without any change there will be near zero possibility for any competitor to show up in the future and undermine that monopoly. It's a near miracle that AMD is performing this well against Intel with an all-new architecture and product lineup... and comparing the R&D budgets against Intel and Nvidia, it very much is.
Personally, I'm now waiting for the Vega announcement, and after that I'll build the "reddest" PC there is. It probably won't have the ultimate best IPC, but it will probably have the best bang for the buck and will perform more than well in the majority of situations now and in the future. This is maybe a little bit sentimental, but the extra thing I get is the feeling that I have supported AMD and haven't been forced to buy the monopoly crap from Intel and Nvidia like it has been for over a decade now.
Why would anybody do that? All it would prove is that a CPU like the R5 1500 is perfectly fine for any mid-range card. But that would not be in their masters' interest, which is to sell as many 7700Ks as possible to gullible folks.
People commenting here only think of themselves. Like they need high fps; their CPU can do the same as Ryzen can do.... jeeezz..
Think of it this way.... AMD has given us a CPU that is more affordable than Intel's, and because of this, people in third-world countries can easily afford this type of CPU, because we cannot deny that the people there are creative and more intelligent than people with a PC having high fps and an expensive water loop.
Ryzen is not only for gaming (which is decent, by the way); it is also for content creation and other stuff (hacking maybe... lol). And the success of AMD will come from those countries.
Just be happy with your Intel CPU and let the loyalists enjoy their Ryzen. And have this argument again in 4 years about which CPU is still relevant at that time.
To see an image with no or little movement, there is little to no discernible difference in smoothness at 60 Hz, 120 Hz, or 144 Hz with solid FPS. It's when you have a lot of quick movement, and a drop or drops in FPS, that the differences between these refresh rates become very noticeable. The effective refresh rate of the monitor also drops significantly when the FPS falls below the monitor's refresh rate: 60 Hz drops to 30 Hz, which is extremely noticeable and decreases the perceived smoothness of the image, while 144 Hz dropping to 72 Hz is not as noticeable. A video card consistently dropping frames below the monitor's refresh rate has an impact on the smoothness of the video. Then there is the difference in color on high-FPS TN monitors as opposed to IPS monitors, IPS having better picture quality. 60 fps on a 144 Hz monitor is going to look better than 60 fps on a 60 Hz monitor when it comes to smoothness of fast motion. Move your mouse cursor in circles really fast and you can see the gaps at 60 Hz; at 120 Hz or 144 Hz the gaps are still there but much less noticeable, because the image is being redrawn twice as often or more. I found a good explanation here: https://www.rockpapershotgun.com/2015/02/12/best-1...
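A rough way to see the math behind that 60-to-30 drop, assuming plain double-buffered V-sync with a constant render time and no adaptive sync (just an illustration, not a measurement):

```python
import math

# With double-buffered V-sync, a frame that misses a refresh waits for the next one,
# so the displayed rate is the refresh rate divided by the whole number of refresh
# intervals each frame takes to render.
def displayed_fps(render_fps, refresh_hz):
    intervals_per_frame = math.ceil((1.0 / render_fps) / (1.0 / refresh_hz))
    return refresh_hz / intervals_per_frame

for hz in (60, 144):
    print(f"{hz} Hz panel, GPU rendering 55 fps -> {displayed_fps(55, hz):.0f} fps displayed")
# Under these assumptions the 60 Hz panel halves to 30 fps, while the 144 Hz panel only falls to 48 fps.
```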
There is actually a very perceivable difference between 60Hz and 120Hz motion on a monitor. Just like there is a noticeable difference between 30Hz and 60Hz, doubling the number of images displayed per second is hard to miss. Granted, I doubt there is much difference between 120Hz and 144Hz, but to say that there is no advantage to higher refresh rate monitors is absurd.
I wouldn't mind a RYZEN 5 that's a 4-core (no SMT) with a TDP of 45-50W, OC'd to 4 GHz. And I don't mind paying more for it. Although AMD might convert it to an APU, with HSA, HBM2, etc. all thrown into this monster. But that SoC would be more compatible with the mobile market, who knows. Or the APU might come out with a Vega iGPU???
It is always amazing to me how rude people are on the interweb. There is no reason for most of it.
The bottom line here is that AMD has done an amazing job getting back in the game. The 1600X is an excellent value with more than enough performance for most folks, most of the time. Thing is no one is a "fanboy" for buying or wanting a processor that fits their needs.
Were I still doing HPC and video trans-coding I'd have snatched up a 1600X or perhaps might have even splurged for one of the 8 core models. Intel would not even have been considered for such workloads.
If gaming was still a priority I'd have most likely leaned towards the i7-7700k which is better in lightly threaded apps and cheaper than the Ryzen 7 chips as well. Maybe the i5-7600k if money was an issue.
The point is that you should buy the processor that is best matched to your workload/budget. There is no single best CPU for all applications. These new AMD chips are all over the map, crushing it in some benchmarks but lagging behind in others. It should come as no surprise that some people love Ryzen while others remain less impressed.
As it stands I'll be buying nothing from either company; got too many mouths to feed, and honestly 5-year-old Ivy Bridge stuff is not that bad.
While I'm babbling... for typical use I think I'd rather have an i3 (or similar) with an SSD, than an i7 or Ryzen 7 with a mechanical HD...
This is why intel has been annoying the hell out of me for the past 5 years or so. They could have given us an 8 core desktop chip without an igp, for the same price as an i7, and they could have done it 5 years ago.
The desktop chips are really just overclocked/volted mobile chips, that they have been milking on the desktop for the last 5 years.
If you want more cores you pay 3-6 times as much for the privilege. Which is nearly all profit, considering that once you drop the iGPU it doesn't cost anything more to add more cores instead. (Granted, the 10-core chip costs a bit more to produce, but not twice as much, and certainly not 6 times as much.)
" It is possible for AMD to offer a 4+0, 3+1 or 2+2 design for its quad-core parts, or 4+2 and 3+3 variants for its hexacore parts, similar to the way that Intel cuts up its integrated graphics for GT1 variants. The downside with this way is that performance might differ between the variants, making it difficult to manage. The upside is that more CPUs with defects can be used.
We have confirmation from AMD that there are no silly games going to be played with Ryzen 5. The six-core parts will be a strict 3+3 combination, while the four-core parts will use 2+2. This will be true across all CPUs, ensuring a consistent performance throughout."
Ta for clarifying that.
Could be interesting consequences: fewer cores = less heat & lots more L3 for each core.
It raises a fascinating question though.
How will they arrange the Raven Ridge APU?
The A10 APU gave roughly half of the chip space each to the GPU and the CPU.
It seems safe to assume space will be tight on the coming Zen/Vega APU.
You would think it may be attractive to use the second L3 cache's space for GPU circuitry, in which case we would have 4-core Zen using a single L3 cache.
prisonerX - Wednesday, March 15, 2017 - link
Hmmm, AMD gives gamers 50% more cores for their expensive chunk of silicon, while with Intel you get... integrated graphics. But gamers are generally so stupid they'll keep buying Intel, because it's got 5% better single thread performance.
Makaveli - Wednesday, March 15, 2017 - link
Gamers generally use dedicated GPUs, not integrated graphics. When was the last time you heard someone bragging about an iGPU? Secondly, if you use high-end GPUs at low resolutions (1080p), possibly for a higher-Hz monitor,
you have to go Intel right now. At 1440p and greater, where you have to worry less about CPU bottlenecks, you are good with Ryzen.
And lastly, most games are GPU bound, not CPU bound; that's where you are best spending your money.
prisonerX - Wednesday, March 15, 2017 - link
I know, that's what makes gamer's continued love of i7's so puzzling. If you look at the die of a Core i7-7700K for instance, the IGP takes up more space that the processing cores! So Intel could have given people twice the cores for the same cost in terms of silicon, which is the determining factor.Now, I understand that the i7 gave better performance, but now with AMD delivering more cores that are competitive, revealing how Intel has treated them like rubes, gamers still vilify AMD and back Intel.
It's puzzling.
Kaggy - Thursday, March 16, 2017 - link
Well most gamers need really large ATX cases, blinking lights on mouse and keyboard.And every component has a "gaming" attached to it.
So generally gamers do not look at logic much if they belong to that class of users.
Drumsticks - Thursday, March 16, 2017 - link
That's a mighty high horse you've got there purely for disagreeing with somebody's aesthetic choices. Do you sniff at people's Non-Black vehicles or painted walls, too?prisonerX - Thursday, March 16, 2017 - link
Don't worry, no-one's trying to take away your blinken lights.
TacticalTaco - Saturday, March 18, 2017 - link
Aesthetic changes like that will often reduce performance in one way or another.
theuglyman0war - Monday, March 20, 2017 - link
Agree... just like a car, love the interior UV led accents tucked away on the new Mercedes s550Color scemes are just as valid a design consideration if u r going to have a design consideration.
But anyone with a holier than thou attitude definitely should be forced back to the days of beige boxes. ( they can get a grey chairman mao pajama uniform of conformity to match. )
Square black with no lighting accents does not mean u r the ruler of taste! :)
zavrtak - Tuesday, April 4, 2017 - link
Not really, there were some tests conducted and gamers actually just buy all that funny stuff with the gaming in the name. That simpleton marketing trick actually works.And so do 20 cent LEDs. It is really, really puzzling for me too. But that do I know, I have 2x 200 bucks gaming keyboards, blinking LEDs in one of my PSUs, etc … while complaining about gamers "falling" for cheap advertising tricks.
RMSe17 - Thursday, March 16, 2017 - link
A lot of gamers are like me, and simply care about best FPS and shortest load times. I will buy whichever CPU/GPU/SSD delivers that, within a reasonable budget. Hating a company is in this case illogical.0ldman79 - Saturday, March 25, 2017 - link
Give this man a cookie.I'm the same way.
Every few years the best bang for the buck changes. It is silly to be brand loyal for any reason beyond that brand meeting your needs the best.
FMinus - Friday, March 17, 2017 - link
I would consider myself a gamer (whilst not as much as I was 10 years ago) I still game each evening a bit. And I can't stand flashy things, I remember even going this far as snipping LEDs from fans that came with a chassis back in the day.Now I'm perfectly happy with my all black aluminum Lian-Li A76 tower where I just throw in random component, it's my third system in the same chassis, nothing glows, or blinks except for the motherboard lights, and I like it that way.
albert89 - Friday, March 24, 2017 - link
I'm the same. The rig should be stealthy and silent.
0ldman79 - Saturday, March 25, 2017 - link
Agreed.I've got a gaggle of big ol' case fans with the LEDs cut...
albert89 - Friday, March 24, 2017 - link
Most definitely.
beginner99 - Thursday, March 16, 2017 - link
I don't think it's puzzling at all especially given reviewers mostly do stupid benches. There are few games that actually benefit from 6 or 8 slow cores over 4 fast ones and that mostly only applies to multi-player like BF1 which hardly ever is benched. So the 7700k especially OCed still comes clearly out at the top.And as has been said for high refresh rate gaming a fast CPU is a must as it must pump out 120+ frames per second. Besides that the launch issues with Ryzen and the fact it is underperforming in gaming compared to other benches will make people hesitant to buy. push them towards the known factor, eg 7700k.
AS118 - Thursday, March 16, 2017 - link
I agree. I'm an AMD fan that'll probably get an R5 1600x, but everything you say about the 7700k is true. If I had a 144fps monitor and played at 1080p, I'd probably get a 7700k. I don't get why some people knock Intel's i7's unnecessarily. They have a use, and it's not stupid to get them over a Ryzen prodcut.niva - Thursday, March 16, 2017 - link
If the usage is just for games it may be exactly stupid to get Itel over the new AMD product. As mentioned there should be issues expected at launch. Fact is that the issues identified so far are nothing major is pretty damn impressive. Look at performance per dollar, it appears that the AMD chips aren't even being harnessed properly yet, games and heck maybe even the windows kernel/scheduler may need some optimizing, yet its competitive. When you factor in the cost of the AMD chips I'm pretty damn impressed. Given this, the long term potential of the AMD chips simply beats the old Intel chips.Now, if you're using the system for things other than gaming, the Intel chip may have other intangible "benefits" attributed to it, such as stability. Those are going to be very difficult to understand until much more data is available a few months/years down the road.
ibudic1 - Thursday, March 16, 2017 - link
Not only that, but AMD does better minimum frame rate with higher core count.Did you guys notice that tests turn off Cortana, software updates, AV, etc? Who does that in real life?
Extra AMD cores take care of that stuff while you are playing.
I don't get it Intel is, in real life, slower.
prisonerX - Thursday, March 16, 2017 - link
That's a good point: gamers whine that games don't make use of extra cores, but there are probably a couple of hundred or more threads on a desktop at any time.
Alexvrb - Thursday, March 16, 2017 - link
Agreed. The extra cores would be nice. Someone already mentioned it but multiplayer matches (which are near impossible to test with any scientific consistency) have different demands than offline benching. A lot of the multiplayer games I play have "stuff hitting the fan" moments where everything is happening at once. In situations like that, I'd probably like an extra couple of threads to shunt background processes.Granted, Game Mode might help alleviate some of that, especially if you have slower CPU and manually close anything that Game Mode doesn't have domain over (kill your background torrents, close all your browser tabs, etc). But it's not a silver bullet and I think in a lot of real-world situations the extra cores are better than an offline "everything that you normally have running is disabled" lab test on a fresh OS install would indicate.
mat9v - Friday, March 17, 2017 - link
What is more, use Process Lasso to bind all those background tasks and programs to cores on CCX1 instead of CCX0, and you get a much better chance of your game threads remaining on CCX0 where they belong, because the Windows scheduler will see the cores on CCX1 running more tasks and so will schedule the game to run on CCX0.
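Process Lasso isn't required for this; the same pinning can be scripted. Below is a minimal sketch using the third-party psutil package — the game's process name and the assumption that logical CPUs 0-7 belong to CCX0 and 8-15 to CCX1 are placeholders, so check the actual mapping on your own chip before relying on it.

```python
import psutil

GAME_EXE = "game.exe"            # hypothetical process name -- change to your game
CCX0 = list(range(0, 8))         # assumed: first CCX = logical CPUs 0-7
CCX1 = list(range(8, 16))        # assumed: second CCX = logical CPUs 8-15

for proc in psutil.process_iter(["name"]):
    try:
        if (proc.info["name"] or "").lower() == GAME_EXE:
            proc.cpu_affinity(CCX0)   # keep the game's threads on CCX0
        else:
            proc.cpu_affinity(CCX1)   # push everything else onto CCX1
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        pass                          # skip system processes we can't touch
```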
rtho782 - Friday, March 31, 2017 - link
I get your point but your specific examples are poor. I have Cortana disabled, Windows Update runs only when I click it, and no AV beyond Security Essentials. Now, if you were to say extra things like keyboard/mouse software, TeamSpeak, etc. etc...
ibudic1 - Thursday, March 16, 2017 - link
You do realize that it's pointless to have more than 30fps for any game right? Your eyes can't detect this.Your hand movements are at 6Hz, sooo any benchmark where low frame rate is above 30fps is stupid. Who cares?
The only time it's important to be over 60, like 61 is if you are doing 3D.
You are wasting money on intel. Very simple.
fanofanand - Thursday, March 16, 2017 - link
This is a false statement. The human eye can detect well beyond 30 FPS. That's like the people who say you can't see past 720P. Well maybe a handful of people can't, but a ton of people can.
BrokenCrayons - Thursday, March 16, 2017 - link
People can discern framerate differences above 30 FPS and even well above 60 FPS. That doesn't mean that 30 FPS won't feel fairly smooth (I personally find 30 is perfectly acceptable, but 40-60 is nicer and I'm annoyed if I'm playing a game at 15 FPS -- From the Depths trundles along at 10 FPS on Quadro NVS 160m and it's annoying, but I still play it like that until I can be bothered to get a faster laptop.), but there are reasonable arguments in favor of reaching above 30 including future-proofing and matching up with screen refresh rates.Friendly0Fire - Thursday, March 16, 2017 - link
Are you actually serious, or merely trolling?
FMinus - Friday, March 17, 2017 - link
I don't want to be that guy, but the human eye has no cap on frame rates; it really comes down to how well your brain is trained to interpret what your eyes capture. Tests have been done where jet fighter pilots could clearly see a picture of an enemy plane shown for 1/200th of a second and name the type of plane - that alone suggests that we can at least "see" 200 FPS and possibly a lot more. It clearly varies from human to human, but I'd wager that people who play fast first-person shooters notice the difference between 30, 60, 120, and 200 frames quite easily.
oynaz - Friday, March 17, 2017 - link
"Your hand movements are at 6Hz"You are confusing Pornhub with gaming, I think.
redrobin9211 - Thursday, March 30, 2017 - link
haha nice!
redrobin9211 - Thursday, March 30, 2017 - link
Looks like you never played Counter-Strike. Frames matter a lot in FPS games, especially games like CS.
Haawser - Thursday, March 16, 2017 - link
Ok, let me ask you a question: which of these rigs would be faster - $219 R5 1600 + $390 GTX 1070 (total $609), or $339 i7-7700K + $250 GTX 1060 (total $589)?
Take into account the difference in motherboard cost and they cost ~the same. So if you had a budget of ~$600 for a new CPU and graphics card, which one should you buy? As a gamer?
goldstone77 - Saturday, March 18, 2017 - link
It also depends mostly on the resolution you are using. At anything higher than 1080p the CPU doesn't matter, and AMD could easily be the better choice if you are looking for better productivity. At 1080p we won't know until benchmarks come out on the R5s. If they are anything like the R7 line they will not be capable of FPS as high as Intel processors, but may still be capable of high enough FPS that it will be fine, and still a big improvement in productivity vs. Intel. Also, in new games that use APIs like Vulkan, such as DOOM, the FPS is so high it doesn't matter which processor you use! So AMD would be a good pick-up for the extra productivity!
HomeworldFound - Thursday, March 16, 2017 - link
It works pretty well from a multi-tasking perspective and gives games like WatchDogs 2 additional stability instead of varied performance.theuglyman0war - Monday, March 20, 2017 - link
I'll definitely stay with Intel if they address the competition with a 6-core Cannon Lake (which I believe is the case?) i7-8700k (or 9700k if 8th gen is Coffee Lake), if they are priced well under $300. Otherwise, 4 cores are way better than 2 for the same reason 6 WILL be better than 4 soon! DirectX 12 is going to leverage as much to offload more power to the GPU. And 1080p is only a thing till the fidelity of 4K is fast. I can barely stand to look at 1080p anymore! (amazing how the brain is wired not to favor perceived loss of quality!)
But when someone hands me a phone with a 1080p screen I instantly notice and find it revoltingly primitive, now that my brain is wired for the crisp sharp legibility of a phone GUI with a QHD AMOLED screen's fidelity and jet black inky self-emitting-diode yummyness.
We are at an awful place of high prices on this display tech. When I consider all my monitors temp solutions. Speed is great. But I am really looking forward to rec2020 combined with Samsung's upcoming QLED self emitting variant ( or OLED ).
In which case..
Speed is not everything for gamers. particularly when the best talent producing graphics still have to compromise on their render targets making the best graphics even look butt ugly sadly.
If Speed was everything no one would be asking if it can run Crysis. We would all be playing CS with the graphics set to "butt ugly". pew pew pew.
Kind of sucks that all this color accuracy, fidelity and contrast ratio is all CURRENT TECH. But I know of no gaming monitor that have a combination of all 3 yet?!!! I easily feel like I am being milked! Speed apparently isn't a thing in TV land ( though consoles still claim there is no thing as PC master race.) But it doesn't matter anyway because the 32" TV market that once ruled! Isn't providing an alternative offering 4k HDR10 WCG OLED 32" TVs...
Which would at least make do till the gamer branded bait monitors sold as much for $2000! :(
Last thing I am worried about is the future of 1080p bangity bangity bangity fps. ( we all can't be the top 5 percent e-sport top dogs )
Hardly representative of anything but a niche concern ( though a valid one! 1080pew pew )
But as someone looking forward to these CPUs IMPROVING the landscape with competition for US INTEL FANS! Sort scratching my head to at some of the negativity thats flying at such a niche concern!
This is going to be the best thing that ever happened to Intel?!!
Sounds like some are trying to sabotage efforts to light a flame under their sleepwalking a55es when we all know they have been cruizen incrementally at an arrogant premium!
Friends don't let friends cruize at arrogant speeds.
Ro_Ja - Thursday, March 16, 2017 - link
"When was the last time you heard someone bragging about a igpu?"My friend actually does, lol. I kinda of agree with my friend though I'm not too convinced that (example) an Iris Pro 580 from some high end Intel chip is better than a GT 750M.
vladx - Thursday, March 16, 2017 - link
In fact they're about equal, GT 750M is only a bit better in Nvidia-optimized games.0ldman79 - Saturday, March 25, 2017 - link
I've got a 960M and an Intel 530.All of the Arkham games aside from Knight can play with the 530 @ 720p at 60fps. Knight won't even load with the 530... lol
Just tinkering to see what they would do. With my default 960M @ 60fps settings the 530 only produces 6fps.
redteam6 - Friday, March 24, 2017 - link
First, you don't have to go Intel for 1080p. When you can push over 150 fps with a GTX 1080, what's the point of your processor having 5% better IPC? If you stream on an i7, there goes your lead.
Second, 5 to 10 percent is hardly brag worthy. Especially when they had to overclock the i7 to 5GHz to do it. I would rather have 8 more threads than 5 percent better IPC any day. Ryzen processors are beasts at low prices. With the 1600 at $219, why in the hell would you ever consider an i7 quad core. Period. Done, end of fn story.
Morawka - Wednesday, March 15, 2017 - link
Not just IPC, Ryzen simply can't clock high.. 4GHz is considered the top end of overclocking for Ryzen. The new Kaby Lakes hit 5GHz on air, easily. Until more games multi-thread, the gamer's choice of Intel is justified. Now, for the creative type or video editor, Ryzen is a steal.
Tchamber - Wednesday, March 15, 2017 - link
Agreed, this looks bad for overclocking Ryzen CPUs. I just received my 1700X and am curious how it will do, but I haven't OC'd since my PIII.
UtilityMax - Thursday, March 16, 2017 - link
Give them a break. This is the first iteration of an entirely new CPU architecture. Perhaps it won't overclock so well, and perhaps the compilers won't produce the best code for it, but I think it's pretty clear there exist relatively easily achievable improvements to Ryzen, and we'll probably see them in the next generation.Meteor2 - Thursday, March 16, 2017 - link
I hope so, because they're not getting my money until they do.
Tewt - Friday, March 17, 2017 - link
I sincerely doubt you ever had any intention of buying AMD.
dysonlu - Thursday, March 16, 2017 - link
For sure. AMD would have been run by imbeciles if this entirely new product line does not have room to grow. This is business 101 practiced industry-wide, especially by storage drives and chips manufacturers.goldstone77 - Saturday, March 18, 2017 - link
Morawka, I agree most of the games not running on the new APIs like Vulkan will have issues only at 1080p. But running DOOM on AMD vs. Intel doesn't matter; you will still have more FPS than you need for a 144Hz monitor, and overclocking will give you no added value. In new games it won't be an issue, but in older games you could run into low frame rates if you are using 1080p resolution. At anything higher than 1080p AMD usually scores within 1-2 fps of Intel or beats Intel. For the most part AMD is going to give you better productivity, and about the same gaming experience as Intel, unless you use a 1080p monitor and play games that don't use the new APIs like Vulkan.
Duckeenie - Wednesday, March 15, 2017 - link
Narcissistic much? Who knew that it was as simple as AMD buyers being more intelligent than Intel buyers? The fact is you don't really have anything to base your data on, because AMD haven't had a competitive chip since probably before you were born.
The only reason AMD keep chuckin' cores at chips is because their performance just doesn't hold up without them. If they could manufacture a competitive chip for less money I guarantee you they would. That is, unless all these years they've been bleeding money on purpose.
ShortHand - Thursday, March 16, 2017 - link
1. Ryzen 7 was built to compete with Broadwell in the virtual workflow department. DOLLAR for DOLLAR you are getting 6900K performance for half the cost. (If you get a 1700 and give it a quick overclock to 1800X speeds, then nearly 3x the dollar-for-dollar performance.) So your cost argument is moot.
2. The Ryzen 7 line is more than competent at gaming. Many reviews (you know... data, that thing you requested) show this.
3. Virtual workstations are a much bigger market than gaming when it comes to chips like this. Why would a company or freelance worker cough up the extra $500 for a Broadwell chip? With near identical or slower performance?
Meteor2 - Thursday, March 16, 2017 - link
Competent yes, but as good as a same-priced or cheaper Intel CPU in terms of average and minimum frame rates? No. According to reviews.
Demigod79 - Thursday, March 16, 2017 - link
For gamers who want to live stream their games or do other stuff besides gaming, the Ryzen 7 makes more sense than a Core i7 7700K. I happen to fall into this category - I game on Steam but I also run distributed computing when I'm away and I would gladly sacrifice a bit of gaming performance for twice the compute performance (the competitive aspect of grid computing has me looking for as many threads I can get). Judging a Ryzen 7 solely based on gaming performance is missing half the picture (or half the cores).Meteor2 - Thursday, March 16, 2017 - link
I do DC too... CPU is irrelevant compared to what GPUs can do.
kart17wins - Tuesday, March 28, 2017 - link
I do DC also... Some projects are CPU only.
silverblue - Friday, March 17, 2017 - link
AMD wasn't really in the position to offer anything better; you can't just create and market a new chip in a short period of time. So, whilst they weren't "bleeding money on purpose", they still had to make the best of a bad situation whilst they worked out a replacement architecture. Releasing Steamroller- and/or Excavator-based 6- and 8-core CPUs would have qualified as bleeding money; there may have been more gains realised due to the extra cores and L3 cache, but not enough performance to truly compete with Intel - Excavator is more of a mobile architecture than anything else (the progression from Trinity to Excavator is actually very impressive considering the unsuitability of the base architecture towards mobile platforms).Ryzen's per-core performance ranges from decently better than FX (most games, especially older ones) to nipping at Intel's heels (productivity and some newer games), and as such you can't use under-performance as a reason for AMD opting for more cores.
AMD's last truly competitive offering was the Athlon 64 X2 back in 2005 and 2006, pre-Core (Phenom II was respectable but lost more comparisons than it won with Core 2), so with that in mind, we're talking 11 years. As you don't know prisonerX's age... I'll just leave that there.
Meteor2 - Wednesday, March 22, 2017 - link
Still got my Athlon 64 X2 box. Damn fine machine.
ddriver - Wednesday, March 15, 2017 - link
I am kinda puzzled by those clocks. With the number of cores dropping, I'd expect to see the TDP budget spent on higher clocks, which is what AMD badly needs to improve its performance in games.Yet the quad core chips have much lower clocks than the octa core. This doesn't make a lot of sense.
Unless those overclock nicely it doesn't seem AMD will have anything to address the high-midrange quads from intel, leaving that market niche without viable offerings. And leaving the 5 series to compete somewhere between the i3 and the lower midrange i5s.
The claimed TDP figures ain't looking too good either. The 1400 is basically half the 1700 at almost the same clocks, yet the TDP is identical. I'd expect it to be at least around 45 watts.
Are those indeed native quad core dies, or just defective 8 core dies with disabled modules? Maybe first tier is salvaged 8 core dies, with native quad core parts picking up later? The hexa cores are most definitely salvaged dies.
prisonerX - Wednesday, March 15, 2017 - link
I think that is notable, and most likely a result of yield management. They're suddenly competing with Intel in performance and process, i.e. they're much more competitive, and they're clearly going to need to use every trick in the book against Intel's massive R&D budget.
Meteor2 - Thursday, March 16, 2017 - link
Unfortunately they don't have access to competitive process nodes (well, unless they contracted Intel to fab the chips). GloFlo '14 nm' FinFet is basically 20 nm with FinFETs, or put another way, Intel's process would be called 10 nm using the same convention. That's where the performance gap is coming from.GF are skipping 10 nm to get to a true 7 nm (i.e. what Intel would also call 7 nm) sooner, hopefully at the same time or even before Intel does.
Ian Cutress - Wednesday, March 15, 2017 - link
The process being used by AMD tops out around 4.2/4.3G for ambient use; the V/F curve just goes mental after 4.1. It's never linear once you go past the sweet spot, even if you have extra power budget to play with.
Don't forget not all chips have the same V/F curve anyway. Top end chips are the best of a bunch - they're binned to segments. Take a look at the range of some of Intel's designs.
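For a feel of why the spare TDP doesn't translate into clocks: dynamic power goes roughly as C·V²·f, so once the required voltage starts climbing steeply past the sweet spot, every extra 100 MHz costs disproportionately more power. A toy calculation — the voltage points below are made up, just shaped like typical Ryzen 1000-series overclocking reports:

```python
# Illustrative only: the voltage needed at each clock is an assumed curve,
# not measured data. Dynamic power ~ C * V^2 * f (capacitance folded in).
volt_for_ghz = {3.5: 1.10, 3.8: 1.20, 3.9: 1.25, 4.0: 1.35, 4.1: 1.45, 4.2: 1.55}

def rel_power(f_ghz: float) -> float:
    v = volt_for_ghz[f_ghz]
    return v * v * f_ghz               # arbitrary units

base = rel_power(3.5)
for f in sorted(volt_for_ghz):
    print(f"{f} GHz -> {rel_power(f) / base:.2f}x the power of 3.5 GHz")
# The step from 3.9 to 4.2 GHz costs far more than the step from 3.5 to 3.8 did.
```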
ddriver - Thursday, March 16, 2017 - link
Well, they should at least target closer to 4 GHz.
Looking at Intel I see the max frequency rising as the core count drops. We have 4.2 for dual and quad core, 3.6 for hexa, 3.2 for octo and 3 for deca.
fanofanand - Thursday, March 16, 2017 - link
I think what you are identifying is proof of very aggressive binning. Their design is slow and wide, and only the best silicon can handle hitting 4+ GHz.
mat9v - Friday, March 17, 2017 - link
Or market separation; remember that those "slower" CPUs often overclock very well, as long as they are O"K".
lilmoe - Thursday, March 16, 2017 - link
I couldn't care less about higher clocks. This tells me that the process they're using is much more efficient in lower/average-high clocks, which means that mobile parts should be pretty damn power efficient, yet very performant compared to Intel. Can't wait for Raven Ridge, hopefully there won't be any delays.BrokenCrayons - Thursday, March 16, 2017 - link
I really hope you're right. I'm interested in seeing what mobile Ryzen APUs can do for laptops with iGPUs.factual - Friday, March 17, 2017 - link
It's the exact opposite of what you are saying, actually. The reason why Intel overclocks better is because they have a much more efficient process technology than GlobalFoundries. Don't expect AMD to be competitive in the low power market. They'll be far behind Intel in terms of performance per watt in the low power market, at least until GlobalFoundries develops a more competitive process technology, which won't happen anytime soon.
ianmills - Thursday, March 16, 2017 - link
My guess is that since most apps cannot use 4+ cores, if AMD had the R5's at the same or higher clocks than the R7's they would be cannibalizing the sales of their R7 line.
I think it's just a marketing tactic
ddriver - Thursday, March 16, 2017 - link
Not really, because for the things where Zen really makes sense, such as rendering, the octa core will dominate higher clocked quad cores, just like it dominates the 7700k. Higher clocked quad core parts would not cannibalize AMD's own market; they would win them some of the gaming market. Well, to be honest, if you game at 1440p or more, 8-core Ryzen is perfectly acceptable; it only does poorly at lower resolution. So not really the "gaming market" but the "low resolution gaming market".
fanofanand - Thursday, March 16, 2017 - link
All Ryzens except Ryzen 7 are salvage dies; Ian addressed this in the article:
"So when dealing with a four-core or six-core CPU, and the base core design has eight-cores, how does AMD cut them up? It is possible for AMD to offer a 4+0, 3+1 or 2+2 design for its quad-core parts, or 4+2 and 3+3 variants for its hexacore parts, similar to the way that Intel cuts up its integrated graphics for GT1 variants. The downside with this way is that performance might differ between the variants, making it difficult to manage. The upside is that more CPUs with defects can be used.
We have confirmation from AMD that there are no silly games going to be played with Ryzen 5. The six-core parts will be a strict 3+3 combination, while the four-core parts will use 2+2. This will be true across all CPUs, ensuring a consistent performance throughout."
wumpus - Monday, April 3, 2017 - link
Except that in reality, GlobalFoundries is unlikely to provide as many salvage dies as the market requires. So a big question is: will the CPU/motherboards allow unlocking, and how likely are you to have an unlockable die?
3+3 means that it is quite likely that one of the disabled cores is good, but you might not be able to run the thing in 4+3 mode. Also 2+2 (R5 1400) has less performance than 4+0 with the same cache size (thanks to needing the switching fabric to access the cache). Presumably at some point AMD will ship an R5 1400 that is only half a chip, but who knows how many chips they need to sell to make it worthwhile. Those masks are *expensive* and even Intel only makes a few.
lefty2 - Thursday, March 16, 2017 - link
Yeah, doesn't make sense. Also doesn't make sense that the 4-core models don't use only one CCX. That would make it more efficient as they don't have to communicate through the Infinity Fabric.I think AMD would be able to clock higher if they made a SoC with only 4 cores, rather than a 8 core SoC with 4 cores disabled. Also would be able to sell a lot cheaper, because of smaller die.
FMinus - Friday, March 17, 2017 - link
I guess 4+0 will be the 1450/X, 1550/X parts down the line, where they won't salvage 8-core parts, if that ever happens.
fanofanand - Friday, March 17, 2017 - link
I think native quads will be paired with an igp for Raven Ridge APUs.
mat9v - Friday, March 17, 2017 - link
No, not really; if they were to use 4+0 then they would lose 1/2 the cache.
silverblue - Saturday, March 18, 2017 - link
Most of the 4-core parts lose half the cache anyway.
silverblue - Friday, March 17, 2017 - link
The 1500X clocks 200MHz lower after XFR; it's a bit disappointing but it's not too far behind. As for TDP, that's not power draw but heat dissipation.The 1500X also has access to the full 16MB of L3 cache over its 2+2 core setup. 4MB of L3 per core sounds very tasty so we will get our first look at how having more L3 cache helps performance.
The 1400X is MIA for now - it clocks higher than the 1500X but has 8MB L3. That might be the one for gamers. Releasing it now would probably cannibalise sales of the 1500X.
beginner99 - Thursday, March 16, 2017 - link
Sadly it's a lot more than 5% ST performance. 5% might be the IPC advantage, but then Intel easily has 20% higher clocks if you get a 7600k or 7700k. AMD's advantage here is the unlocked parts, basically invalidating the entire Intel non-K lineup, as then Intel loses this clock advantage.
prisonerX - Thursday, March 16, 2017 - link
Fair enough, but if you're getting 50% more cores for the price, it's a no-brainer.
But then the lame excuse is that some game somewhere can't take full advantage of the extra cores.
Murloc - Thursday, March 16, 2017 - link
it's not a no-brainer if you can't use the extra cores, period. You're just as bad as intel fanboys.
ShortHand - Thursday, March 16, 2017 - link
Who is buying a Broadwell chip for a gaming build? These chips are clearly targeted at the lucrative virtual workflow market. (It doesn't hurt that they are quite competent at gaming as well.)
Notmyusualid - Wednesday, March 29, 2017 - link
@ ShortHand
MOST high-end gaming rigs out there, to tell you the truth.
Here, try searching 3Dmark. I've done this before just to prove a point - I searched the first FIVE HUNDRED benchmarks (choosing only GPU 1080) for TimeSpy, and I saw mostly Broadwell-EP/Haswell-EP all the way. No quad cores of any kind, no dual-channel memory systems either.
The numbers don't lie:
http://www.3dmark.com/search#/?mode=basic&url=...
wumpus - Monday, April 3, 2017 - link
I'd go so far as to say most gaming systems have 8 cores as well, and these are the systems that buy plenty of games at full price: PS4 and Xbox One consoles.
The gaming world moved on to 8 cores years ago; don't expect things to slide back.
prisonerX - Thursday, March 16, 2017 - link
The fact that you think games will never make use of the extra cores just illustrates the level of stupidity (and ignorance) that AMD has to counter in order to get a fair shake in the marketplace.
MadAd - Thursday, March 16, 2017 - link
I am one of those gamers who plays a modern niche MMOFPS built on DX9 (or they simply wouldn't get 200-300 gamers running around, shooting machine weapons, plus all their vehicles driving and firing in the same hex) that absolutely needs the best single-threaded perf I can get AND won't use extra cores unless the engine is majorly revamped. By the sound of you hammering on at that poor guy, it makes me feel like you'd call me EVIL for still having to choose a 7700K after waiting to see the real-world Ryzen benches.
So what would you suggest? Buy a Ryzen anyway in case some year down the road my game's devs use more cores? Will you try to say it's OK to lose at minimum 5-8% SCP (and that's before the added perf I'll get with an overclock bump, something else that Ryzen silicon doesn't seem able to do well just yet) just because of the warm fluffy feeling I'll get buying AMD? I need to upgrade now; what should I buy and why?
BrokenCrayons - Thursday, March 16, 2017 - link
I don't know you or your gaming habits MadAd, but my knee-jerk reaction is to think about how much longer you're going to play those particular games and whether or not the single threaded performance difference between Kaby Lake and Ryzen will actually have a substantial impact on your gaming experience in both the short and the long term. The bottom line is that people usually move onward to other games eventually.If you plan to build a box you'll use for years to come without major upgrades and you think you'll be switching to other games sooner or later during that system's life, then having more cores might be better in the long run since it really looks like we're standing at the cusp of a big push for more-than-4-core gaming.
On the other hand, if you're like me and you sit on the same games for a very, very long time and you're feeling the single threaded performance pinch right now, it's hard to deny Intel's lead. If you're that sort of person, you may just not get a whole lot out of those extra threads over the life of your next build.
I really think you're the only person that can make a decision for you. And, on the bright side, no matter what processor you end up getting, 7700K or some Ryzen 5 version, you're still getting a very modern CPU. Just find a 4K monitor and the GPU bottleneck you end up with from driving that resolution should hide any CPU shortcomings in either MT or ST performance. :3
MadAd - Thursday, March 16, 2017 - link
> sit on the same games for a very, very long time and you're feeling the single threaded performance pinch right now
^^ this. I've played most every main FPS franchise since 2000, and for the last 4 years there has been no other FPS worth playing for me than Planetside 2.
Other combined-arms FPS stopped at 64 players, making them small arena shooters in comparison. Planetside 2 is all-out warfare, but keeping up with 200-300 or more people, with any vehicles they bought, in one location of a much larger continent takes major processing power, and it's very CPU limited, with the devs putting more and more load on as the months go by. So yes, I am feeling the pinch in, say, a frame-crushing tower battle, and I hate to have to reduce graphics; potatoside isn't my thing :)
silverblue - Friday, March 17, 2017 - link
When I used to play MMOs, some people were obsessed with making PvP videos. Having fast storage plus the additional CPU cores is only going to help with that.In fact, I remember seeing tests back then focussing on suitability of various CPUs at running multiple applications at the same time, pointing out the strength of triple-core CPUs (i.e. Phenom II X3) over the more traditional dual-core CPUs (Core 2 Duo/Phenom II X2). Don't really see those nowadays.
prisonerX - Thursday, March 16, 2017 - link
Great, you're absolutely right, and you represent approximately 0% of the gamer market.
Congrats.
lmcd - Thursday, March 16, 2017 - link
Great, you're an AMD fan, and you represent <20% of the market even after Ryzen.Congrats ;)
Kutark - Thursday, March 16, 2017 - link
@lmcd... *drops mic*
prisonerX - Friday, March 17, 2017 - link
Children like yourself think it's "my team versus your team." Why would I care about AMD?What I do care about is that the monopolist corporate criminal Intel is screwing consumers and screwing the industry. But unfortunately the world seems to be full of dullards whose only response to being reamed by Intel is "Please Intel, may I have another?"
Arbie - Saturday, March 18, 2017 - link
@prisonerx - browsing the thread it seems that as soon as you get involved it all degenerates into a flame war. Can't you post without taking things to extremes and calling people names? That wrecks it for the rest of us.
prisonerX - Sunday, March 19, 2017 - link
I challenge the pervasive groupthink, I'm sorry that that bothers you.Try opening your mind a little and see what happens.
Meteor2 - Wednesday, March 22, 2017 - link
Arbie isn't wrong.
prisonerX - Saturday, April 1, 2017 - link
Wow, so many preciously sensitive flowers here! I'll be sure not to say anything controversial from now on, lest your feelings get hurt. Meteor2, I know you find it hard to cope with people who disagree with you.
FMinus - Friday, March 17, 2017 - link
Yes, because they are revisions upon revisions; what's Kaby Lake, stepping 9? Ryzen can only get better over time. The difference in gaming other than at 1080p is minimal if any, and nobody buys a $500 CPU to game at 1080p; likewise an i7 for 1080p is wasted cash.
Meteor2 - Thursday, March 16, 2017 - link
I'd like to see a clear-cut list of consumer workloads (and games, when *not* bottle-necked by the GPU) which noticeably scale well with more than four threads.Unless it's a real good list, Intel wins. Higher IPC, higher clocks, same power consumption.
Beany2013 - Thursday, March 16, 2017 - link
Really, any professional use case. Even if you operate entirely in a web-based CMS (CRM, webmail, Google Apps, Office 365 etc) in Chrome, with each tab grabbing a thread, that helps keep things smooth.Move to fat clients (Outlook, Excel, a browser, a music player in the background) and then have a larger workload to shift to (say, doing some light photo editing) then having more threads available = less pissing about watching Outlook chug it's guts up as you search for that invoice from six months ago.
As for me? Thunderbird, Chrome, Firefox, Opera, half a dozen terminal sessions, and a couple of VMs on the go at any one time that I would like to be able to throw more resources at. Threading is very, very useful for me, and for any serious professional workload (image and light video editing or CAD work where the author cant' justify throwing £2k at a workstation or just plain can't get the budget for it at work, etc).
And is very, very useful for everyone who isn't a gamer. Which is, in short, almost everyone.
Steven R
Dribble - Thursday, March 16, 2017 - link
Not really any professional use. For normal use (office, Chrome, Excel) an i3 works fine, even with stuff like a music player in the background. If you are an AMD fan and just argued how a Ryzen is "good enough" for gaming, then the same applies to an i3 and any standard office work.
You only need all those cores if you are doing something compute intensive (compiling, running lots of VMs, some image/video editing - although a lot of that uses the GPU, not the CPU).
Nagorak - Thursday, March 16, 2017 - link
If we're talking "normal use", then it comes down to the fact AMD is cheaper. For normal use we can also throw out overclocking, so it's just stock clocks. In that case you can get a true AMD four core for the same price as an i3.For normal users AMD Is a great buy.
acparker18 - Thursday, March 16, 2017 - link
If you can't spell "Ryzen" your argument is invalid.ShortHand - Thursday, March 16, 2017 - link
FINALLY someone who actually gets it! This comment section seems to be full of rubes who think 1080p performance on their 144Hz overpriced gaming monitors is the only metric by which you judge a chip.
Meteor2 - Thursday, March 16, 2017 - link
Well this is it. For web and basic office apps an i3 is fine. What scales with more than four threads? Processing big batches of photos in Photoshop? Finishing a video in Premiere? (Having never done that, I don't know what that means.) I'm not convinced x264 and x265 scale too well; I've not seen benchmarks of sensible use-cases like transcoding four or five DASH streams or converting an h264 library to h265.
VMs I get though, and running compilers, if you're a developer. I'd say those are professional use-cases though, where your equipment may well be provided by an employer. I'm more interested in consumer workloads (such as amateur vloggers).
Achaios - Thursday, March 16, 2017 - link
I3? Naw, man. You can make do with a Pentium 4 from back in 2004. It does web, office and apps just fine.
Meteor2 - Thursday, March 16, 2017 - link
Heh, my office PC is a Core 2 Duo. All the uni PCs I see when I visit some labs are too.We're slowly getting dual-core i5 U laptops as replacements; I doubt they are much more powerful.
Beatinstick - Friday, March 17, 2017 - link
Ouch! Most of the PCs at my work are i7s that are maybe around four years old. It's quite overkill, but it's nice to not be bogged down at all, as with my job we use many programs/windows consistently all day. Lots of Outlook, browser tabs (intranet portal), MS Access programs for SQL database browsing and interaction, CRP system, Excel, etc.
prisonerX - Thursday, March 16, 2017 - link
I'm guessing you flip burgers for a living. In reality, competent programmers query the number of cores, create a proportional number of worker threads (for heavy-duty processing-only threads, it will usually be 1:1) and feed them tasks. Hard-coding critical infrastructure is just incompetent. 4 isn't some magic number and neither is 2 or 8.
Or let's say that you can't chop up your work into bite-size pieces and that there is a strong sequential dependency in your code. In that case a competent programmer would pipeline the work, and the question becomes how to allocate work to each pipeline stage. Of course there is no way of exactly dividing the work so that each stage keeps a single core busy, so you just divide the work at natural boundaries where it's most efficient and make sure you have (many) more stages than you have cores, such that roughly 2 or more whole stages run on a core. Bigger stages don't scale up or down well in terms of cores. You can't make assumptions about how many cores you have and have your code only be efficient at a certain number. That doesn't work.
The idea that software, including games, is designed for some specific number of cores is a myth.
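A minimal sketch of the pattern being described, with the worker count taken from the machine rather than hard-coded (a Python process pool stands in for whatever threading primitive a real engine would use):

```python
import os
from concurrent.futures import ProcessPoolExecutor

def heavy_task(chunk):
    # stand-in for one bite-sized piece of CPU-bound work
    return sum(x * x for x in chunk)

def process_all(data, chunk_size=10_000):
    workers = os.cpu_count() or 1      # 1:1 workers-to-cores for compute-heavy work
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(heavy_task, chunks))

if __name__ == "__main__":             # guard required for process pools on Windows
    print(process_all(list(range(1_000_000))))
```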
Kutark - Thursday, March 16, 2017 - link
Competent programmers... you're cute.
prisonerX - Friday, March 17, 2017 - link
It's true, I'm gorgeous. So do you think anyone can program or no-one can program?
Notmyusualid - Wednesday, March 29, 2017 - link
@ prisonerX
Love the response. I boxed pro for a couple of years, and people would 'call' me on it from time to time, usually keyboard warriors, but once in a while in real life also, as I'm not 10ft tall I guess.
And I'd have to ask, well, just because nobody *you* know is a fighter, doesn't mean I can't be, or that nobody else is?!? But I like the way you dealt with it, by pushing the question back to them.
prisonerX - Saturday, April 1, 2017 - link
I'm glad you like my work. There'll be plenty more where that came from, so you're in for a treat!
As for the rest of your comment, what the **** are you on about? Are you trying to tell me you're some kind of violent moron, and that I'm a coward? OK! LOL!
Does it really bother you that I have a cogent argument and that people who I call out have no comeback? Is that why you're spouting that nonsensical drivel?
I've met more violent people than you'll ever know. Not just run-of-the-mill violent morons, but also the psychotic and violent. And even though I'm a pretty large and strong guy, I've never been in a fight with anyone. Why? Because I think violence is really, really stupid and solves nothing and just creates more violence. Also because my size gives a lot of people pause, even the mentally ill. I guess that makes me a coward, right? Or does it just mean that I actually have a brain? Something for you to think about.
So yeah, I'm not sure what you're trying to say... that if I was making my points in person you'd punch me in the mouth and then I'd start crying and tell you that you're right? Is that your fantasy? LOL!
vanilla_gorilla - Sunday, April 2, 2017 - link
"And even though I've a pretty large and strong guy"What did you and this "large strong guy" do together? I'm on the edge of my seat.
Nem35 - Thursday, March 16, 2017 - link
Steven, very well explained.The fact that 90% of people who make comments here are gamers is giving different picture on how well the CPU performs.
GO RYZEN!
Tewt - Friday, March 17, 2017 - link
"Unless it's a real good list, Intel wins. Higher IPC, higher clocks, same power consumption."Ok, then. BYE! Not sure what you and others like you are even doing here. All I see is you pretending you are looking for something better. We get it, Intel is better at gaming. I don't dispute that. How about just enjoy your Intel product and piss off. For the other 50%, why are you even engaging people that obviously never had a plan of purchasing AMD?
Meteor2 - Friday, March 17, 2017 - link
I never suggested that you dispute that Intel is better at gaming. I'm genuinely curious as to what workloads people are doing on their home PCs which would benefit from more cores.
Tewt - Friday, March 17, 2017 - link
All I see is someone saying something about AMD and you generally responding, "no, Intel is better." Ok, fine. You are absolutely sure your choice of Intel works for you and there is nothing AMD currently offers that will fit your use case. There is no reason to even be here. I just went to Intel's last CPU article here, http://www.anandtech.com/show/11083/the-intel-core... and any mention of AMD is marginal. There is no AMD fan constantly berating any Intel achievements, but the reverse is not true here. AMD is finally competitive (not the same as saying they are better) and you would think an Intel hornet's nest has been kicked.
They spent so much money on their Intel CPUs, so certainly nothing out there can equal their performance. And if, god forbid, there is some program or use case that benefits more from AMD, then nobody is using it and ..... it is not gaming!!! Because YES, 99% of computer users are gamers and if not, then game developers. Rabid App.. I mean Intel fans :)
puncs - Sunday, March 19, 2017 - link
Love your comment! 100% true
0ldman79 - Monday, March 27, 2017 - link
Home workloads that benefit from more cores...
Playing games while recording TV while compressing the last thing I recorded while serving media to my kid's tablet.
That's why I bought an FX 6300. Each core wasn't faster than Intel's, but its load capacity was higher. 6 is greater than 4.
If you measure by a single metric then my laptop with an i5 6300HQ is better than my FX6300. If you run a dozen apps at the same time the FX performs better.
vanilla_gorilla - Sunday, April 2, 2017 - link
Virtual Machines, that's what I'm buying Ryzen for.For gaming, anyone who likes to record/stream their games, which is a large group and growing extremely fast.
wumpus - Monday, April 3, 2017 - link
Consumer workloads are pretty much Office+everything that runs on a Chromebook. Intel Pentiums 4560/4600 are pretty much the way to go (or you could use AMD chips, they aren't likely to notice the difference). Make sure there is an SSD (and it doesn't have to be very big) and you are done.I don't think ARM and Windows/Office are quite ready to work together, but for the users who aren't heavy Office users, Chromebook works wonders.
But for anandtech readers, I'd recommend any of the Ryzen parts without question (unless they really can only afford the Pentium chips. They are really good for the price).
UtilityMax - Thursday, March 16, 2017 - link
First of all let's get the facts straight. Before Ryzen, the AMD CPUs gave you something like 50% of the single core performance of an Intel CPU. As a result, a quad core AMD CPU could barely keep up with the dual core i3, and ONLY in heavy multithreaded tasks. In single-core performance, AMD would always fall flat on its face and fail to be competitive even with a Core i3.
Second, why should gamers even care much about CPU performance? These days, it's all about the GPU. You gotta spend 400-500USD on a GPU before you start realizing that now the CPU is bottle-necking you. Otherwise, the performance of gaming rigs with either an AMD or Intel CPU is pretty much similar.
Meteor2 - Thursday, March 16, 2017 - link
Well with excellent $400, $500 and $600 GPUs available from Nvidia and hopefully soon AMD, I'm very interested in where the CPU becomes the limit. I suspect it's i5 level. So for $240, what's going to be better, a faster i5 or a slower R5 with more threads?And given the way GPUs are developing, these questions may well become relevant with just $200 mainstream GPUs in a few years, well within a CPU's lifespan.
pSupaNova - Thursday, March 16, 2017 - link
More threads is where graphics APIs are going; take a look at some Vulkan benchmarks.
mat9v - Friday, March 17, 2017 - link
Nope, in 2 years new games will drag those new GPUs to the ground anyway.Games will be GPU limited for a foreseeable future and only resolution point will climb through 1440p up to 2160p with falling prices.
Only games that require very high fps are bottlenecked by Ryzen cpu and even CS:GO does well enough when core bind to CCX0 or CCX1. MOST gamers play their games with graphic settings tuned for 60-100fps and in that space Ryzen does well enough.
Achaios - Thursday, March 16, 2017 - link
Rome II Total War, Attilla Total War, World of Warcraft, Starcraft II, etc.This is why gam3rz are interested only in IPC, and this is why I, and millions of other gamers, will never buy lame-ass AMD CPU's as long as their single-threaded performance is worse than that of an i5-2500k from 2009.
Hope I made myself clear.
Outlander_04 - Thursday, March 16, 2017 - link
Something is clear, yes. Sadly though, most review sites [and often Intel fanboys] ask the wrong questions and then generate dumb answers.
The criticism of Ryzen's gaming potential is 1080p performance. The argument is that 95% of gamers use 1080p or lower resolutions. OK, that's fair enough. Also fair is a reasonable speculation that 95% of those 1080p monitors run at 60Hz.
60Hz is the exact same thing as 60 fps. A 60Hz monitor can NEVER display more than 60 fps. If a graphics card sends a higher frame rate the monitor drops the frames and never displays them.
So the tester logic of using the most powerful graphics card with a cpu and running at 1080p to show a cpu bottleneck is bizarre and stupid. It lacks relevance. Unless the game is playing at less than 60 fps the user is experiencing the exact same thing: 60 fps.
If folks with a 60Hz 1080p monitor buy an Intel they get the same gaming experience and worse [often much worse] encoding and multitasking and streaming.
anandreader106 - Thursday, March 16, 2017 - link
+1
mdriftmeyer - Thursday, March 16, 2017 - link
Well stated, but sadly most gamers aren't engineering graduates and couldn't grasp the V-sync and frame rate pairing. They couldn't understand the difference between volts and volts root mean squared [Vrms] and how they play into hardware circuitry.
If they were engineers they'd spend very little time gaming and most of their life creating solutions.
Kutark - Thursday, March 16, 2017 - link
I feel bad for computer engineers. They create these works of pure genius, efficient, fast, etc. Then it gets handed off to some lazy-assed coder who does the bare minimum shit job he/she can that gets it to work.
I have a buddy who works in federally funded research who complains about how they just throw more hardware at problems, instead of recoding the software to make the program more efficient.
It's kind of sad that there is that much of a disparity between hardware and software engineers.
prisonerX - Friday, March 17, 2017 - link
Software is much harder than hardware.
lmcd - Thursday, March 16, 2017 - link
Average 60FPS at 1080p now and suddenly sub-60FPS on upcoming titles seems even more likely, barring DX12/Vulkan actually taking off.
+2
fanofanand - Friday, March 17, 2017 - link
Ryzen single-threaded performance has exceeded 2500K IPC, so I guess you can now go buy an AMD CPU :)
mat9v - Friday, March 17, 2017 - link
Performance of the i5-2500K is higher only when overclocked to the max. But sure, if you are one in a million (/s) who plays those ancient games, stay on Intel. And please keep away from threads such as this.
Try booting up your Linux partition with AMD (or Nvidia, really) on your fancy new kernel install.ibudic1 - Thursday, March 16, 2017 - link
It doesn't get 5%; in reality it's worse, because:
1. Tests where Intel wins happen when they turn off Cortana so Windows doesn't stress the extra cores. This is a biased test designed to help Intel, whose chip is worse.
2. In the real world no one can see more than 30fps, games where intel "wins" are over 100fps, so another way to bias the test toward intel, which is not up to par.
3. The lowest FPS favors AMD - this is what is annoying in games, when you get a computer to lock up, it won't lock with extra cores and better threading with AMD which is superior.
4. Who buys $320, and then games on 1080 with a crappy graphics card?
It is patently obvious that these sites are paid off by intel, where they try to tune the tests so that Intel will look better when in fact it is not. Anand, how about this. Do a real computer build and test games with everything on? Let's see who's better?
Maybe if you come up with a test that is not biased you'll get more traffic vs competition and other sister websites? Just a thought...
mat9v - Friday, March 17, 2017 - link
1. Nobody tests real life situations. If they did, you would have Skype, Discord, maybe Spotify and Twitch, probably many others in the background, maybe also the Steam overlay with some webpages open.
2. Sorry, in real life even idiots can see 60fps; just open YouTube and compare gameplay at 1080p30 and 1080p60. Good players can easily see differences between 60 and 100fps; e-sport fanatics can see (in blind tests) differences up to 150 fps. Fighter pilots can see, and recognize the content of, images displayed for 1/200 of a second.
3. More cores, fewer stalls?
4. The problem is that people do have great GPUs, and the better the GPU, the better the CPU needs to be.
Don't whine about paid sites, it's unbecoming. Create better tests yourself and interest people in them if you are an expert instead of pointing fingers.
marksteaven11 - Saturday, March 18, 2017 - link
We are so thankful for this.
albert89 - Saturday, March 18, 2017 - link
Same crap we are hearing from Gamers Nexus, Linus, Nerdgasm, JayzTwoCents & Gordon from PCWorld's videos, who have all retracted these claims.
Namisecond - Friday, March 24, 2017 - link
What makes you say gamers are continuing to buy Intel? Do you have access to some market data?
wumpus - Monday, April 3, 2017 - link
Check GloFo's capacity. Of course, since self-proclaimed gamers are a tiny sliver of Intel's market, GloFo might be able to supply them all. AMD can't "beat" Intel in sales (and couldn't when they held the upper hand across the board with the Athlon); they just don't have the manufacturing capacity.
I'm pretty sure Steam makes this data publicly available. You might have to watch the deltas (which might include retired Bulldozer boxes replaced with Ryzen) but it should be close enough.
albert89 - Friday, April 21, 2017 - link
Gamers need that 5% better single threaded performance because 2017 games haven't progressed since 1980! Apparently developers haven't heard of multithreading, multicores, 4 channel memory, HSA, DX12 etc.
802Shaun - Wednesday, March 15, 2017 - link
CORRECTION: The i5-7500 can turbo up to 3.8, not 4.8. That would be neat, though. :-)
Ian Cutress - Wednesday, March 15, 2017 - link
Hah, that's what I get for a half-copy/paste. Updated :)
jordanclock - Wednesday, March 15, 2017 - link
The "Ryzen 5 1500X vs Core i5-7500" table has 6/12 for the core count on the 1500X. That should be 4/8, right?Ian Cutress - Wednesday, March 15, 2017 - link
Updated :)
t.s - Wednesday, March 15, 2017 - link
In "Comparison: Ryzen 5 1500X vs Core i5-7500" table, wrong core/threads section for 1500X. It should be 4/8, not 6/12.mr_tawan - Wednesday, March 15, 2017 - link
> the Wraith Spire is 65W for high-ambient conditions
Does 40°C qualify as high-ambient conditions? We are approaching that temp here in Thailand.
prisonerX - Wednesday, March 15, 2017 - link
You should probably install some AC, because most consumer computer gear is out of spec over 40°C.
Kutark - Thursday, March 16, 2017 - link
Yeah, I'm sure living in Thailand he has an extra 3-5k USD lying around that he can use to "install some AC".
prisonerX - Friday, March 17, 2017 - link
You have a gift for speaking confidently from a position of ignorance. You don't know much about income distribution in Thailand, and the 3.8kW split system I installed last summer was $300 before tax.
Nagorak - Thursday, March 16, 2017 - link
Do your temps ever go below 40°C there?
Notmyusualid - Wednesday, March 29, 2017 - link
@ mr_tawan
Hello from Phuket. Yes, it's high here. My AW18, which usually runs 4x 4.4GHz in the UK/USA, in Thailand runs x44 / x43 / x42 / x41, or even x43 / x42 / x40 / x40, or I throttle from time to time.
@ Nagorak
Last year, we had a record drought, longest in 50-something years, and I believe Bangkok hit 42 or 43C. Water trucks were everywhere, visiting resorts and topping up wells / tanks / pools. It was the first and only time I had to call the water truck myself.
If I go out, and return to the condo after a long day, the temp says 44C indoors. The villa is not too bad, due to its high ceilings, but 37C is still typical on return.
MikeMurphy - Wednesday, March 15, 2017 - link
The 7600K looks way outclassed by the 1600X, which will have the full 16MB L3(?) cache. Accordingly, it seems like it must be a 1700/1800 with defective cores.
Also good to finally have some excitement at the $100-$200 price point.
iwod - Wednesday, March 15, 2017 - link
Correct me if I am wrong. I remember Intel no longer does much die disabling, so a dual core is not a quad core with 2 cores disabled.
Would AMD be in disadvantage when it comes to BOM cost?
There was a recent discussion in LLVM 4.0 release, some developers suggesting they could get another 5 - 10% performance out of Ryzen.
Ian Cutress - Wednesday, March 15, 2017 - link
Probably yes, but by having one silicon design you (a) save money on multiple sets of masks for each design, and (b) make deployment of various SKUs easier as there's only one design to worry about (you just pick the bin you need). AMD probably pays more per wafer than Intel does anyway, given Intel is vertically integrated. Depends on the cost difference of manufacture.
Ian Cutress - Wednesday, March 15, 2017 - link
Intel does plenty of die disabling in HEDT and Xeon E5/E7 parts. It's a cornerstone of their strategy there.
JasonMZW20 - Wednesday, March 15, 2017 - link
If you get one of these processors, buy 3200MHz DDR4 RAM with Samsung B-die chips.
Anyone else notice the 1500X and 1400 L3 cache difference? We know each CCX has 8MB L3 cache, so is the 1500X a 2+2 part (2 CCX) and the 1400 a 4+0 part (1 CCX)?
phoenix_rizzen - Wednesday, March 15, 2017 - link
That's what I'm wondering. Is the 1400 a dual CCX design, or a single CCX design? And if a single, does it still have dual memory channels?
Kakti - Wednesday, March 15, 2017 - link
The choice of 3/3 instead of 4/2 for the six-core part makes sense; the use of 2/2 instead of 4/0 on the four-core part is pretty interesting though. I wonder just how big the performance penalty is crossing the Core Complex. One thing that using the 2/2 setup does provide is access to both CCXs' cache. Whether they provide the full 8MB per CCX on the lower core parts is still unknown, but it'd be pretty unusual for a single core to have 4MB of L3 cache.
I remember Intel saying that there was a point of diminishing returns (or at least no further increase in performance) on the size of the L4 cache on the Broadwell-C parts. For Skylake they reduced it to 64MB I think. I know L3 and L4 are treated differently, but I wonder if having 8MB of L3 for 2 cores makes sense, or if cutting it down a bit actually helps.
Either way, these look like a pretty decent offering by AMD. I really wish they had more IPC or clockspeed headroom to overclock them into the neighborhood Intel has occupied for years. But if you're using lots of threads and don't care about clockspeed or IPC, these are great processors. Very interested to see the eventual market share the 1600X takes compared to the i5 series. Also interested to see what Intel does core-wise after Coffee Lake... we have their 6-core roadmap, but in two years they're probably going to need to offer a mainstream 8-core / 16-thread processor, even just to keep marketing parity.
*edit actually just found the article, on Anand.
http://www.anandtech.com/show/9582/intel-skylake-m...
Filiprino - Thursday, March 16, 2017 - link
How come Ryzen has Broadwell IPC when AMD claimed they surpassed their goal with 52% over Bulldozer (originally 40% over Bulldozer, which put them at Broadwell level)?
Other websites put Ryzen at Skylake / Kaby Lake levels.
vladx - Thursday, March 16, 2017 - link
40% more IPC was Haswell level, not Broadwell.
Filiprino - Thursday, March 16, 2017 - link
They marketed against Broadwell. Anyhow, +12% puts them at Skylake/Kaby; Broadwell->Skylake is about 5%.
vladx - Thursday, March 16, 2017 - link
They marketed against Beoadwell only after surpassing initial expectations.
vladx - Thursday, March 16, 2017 - link
*Broadwell
silverblue - Friday, March 17, 2017 - link
Excavator, not Bulldozer.
beginner99 - Thursday, March 16, 2017 - link
The 1600X and 1400 seem kind of pointless for now. Save $30 and OC the 1600 to 1600X levels. On the other hand, the 1400 is too crippled (also less L3 cache, unless that's a typo), and hence the $20 more for a 1500X is totally worth it.
Outlander_04 - Thursday, March 16, 2017 - link
Depends on performance relative to other chips on the market, surely.
Chaitanya - Thursday, March 16, 2017 - link
Intel's bread and butter line of i5 and i3 CPUs needs a massive price cut. Hopefully this is where AMD will make the most out of their Ryzen lineup.
duploxxx - Thursday, March 16, 2017 - link
AMD, look at the market!!!! Counter every bin (3-5-7) with an equal core count to the competitor's best-selling part, at higher GHz and a lower price, and there is no Intel choice left... Don't push on higher core counts and features; they've done that before and it did not work out. Missed opportunity to ditch the full competing lineup.
GeoffreyA - Thursday, March 16, 2017 - link
I wonder, will newer steppings of the cores ramp clock speeds up a bit?
benedict - Thursday, March 16, 2017 - link
I don't understand the price/perf chart. The 1600 has slightly lower clocks but 50% more cores/threads, yet it sits at almost the same level of performance as the 1500X. That's a clear indication that the chart is strongly favoring ST performance.
Meteor2 - Thursday, March 16, 2017 - link
I make that four reviews promised at the moment:
*Kaby Lake over-clocking
*G4560
*Ryzen 7 part 2
*Ryzen 5
There's always a feeling at AnandTech of promising the next thing before the last is done and out the door.
Ian Cutress - Thursday, March 16, 2017 - link
The project list is long, and available time and hands are low. Especially when I cover a wide range of topics solo. The spirit is willing but unfortunately I don't have access to Narnia xkcd.com/821/
MattMe - Thursday, March 16, 2017 - link
Personally, I'm loving your work. Keep it up.:)
pSupaNova - Thursday, March 16, 2017 - link
You lot do excellent, thorough work; please don't go for quantity over quality.
Meteor2 - Thursday, March 16, 2017 - link
You do a good job. I wouldn't be here reading this if you didn't :). But remember: under-promise, over-deliver. Not the other way round!
prisonerX - Friday, March 17, 2017 - link
They also have a notably whiny readership.
JoJo-JCLDJB - Thursday, March 16, 2017 - link
I honestly think the R5 line-up will overclock better, if you look at the 8-core R7 line-up compared to the R5.
1800X 8-core 100% heat with a TDP of 95w
1600X 6-core 75% heat with TDP of 95w
You could take the 1600X from 75% heat and overclock it to, say, 4.3GHz so it reaches the same 100% heat output as the 1800X, while keeping its 95W TDP.
A definitive overclocker, better than the rest.
This is my personal opinion, based on nothing else but logic.
hyno111 - Thursday, March 16, 2017 - link
Based on 1700X/1800X overclocking performance, it is unlikely that many 1600X can reach 4.2GHz, no matter the mobo/cooler used.
JoJo-JCLDJB - Thursday, March 16, 2017 - link
But that's my point exactly. Isn't the 1700X/1800X's performance limited by the fact that they get too hot too fast, whereas the R5 line-up could potentially be in a better position?
I know voltage is a factor too, but if there's less heat, technically you could increase the voltage (just that little bit more), resulting in more OC'ing headroom.
See what I mean?
cheshirster - Thursday, March 16, 2017 - link
Ryzen is limited by its 14nm low-power process. The voltage needed for 4.1GHz is close to the critical 1.475V, so the max OC will be the same for the 1600X and 1400X as it is now for the 1800X.
Outlander_04 - Thursday, March 16, 2017 - link
I suspect that no Ryzen chip is going to be a great overclocker. Refinements in the gen 2 chips will hopefully help, but till then 4GHz is probably a practical limit no matter how many cores.
sharath.naik - Thursday, March 16, 2017 - link
Am I the only one finding it a bit odd that the 4-core Ryzen is 65 watts? You have mobile Intel chips at similar frequencies that are under 45 watts and have a GPU in them. In short, all Intel has to do is drop the GPU and replace it with more cores to beat AMD in core count, while it already has the advantage in the IPC and frequency departments. How long do you think before Intel comes up with one, at least one with a very low-performance, video-out-only, non-gaming GPU?
Krysto - Thursday, March 16, 2017 - link
Don't compare mobile chips with desktop ones. I don't think Intel has any desktop quad-core that is at 65W, only dual cores. So the quad-core or even six-core at 65W looks very good.
Meteor2 - Thursday, March 16, 2017 - link
i5-7400.
mdriftmeyer - Thursday, March 16, 2017 - link
Single-threaded cores.
Outlander_04 - Thursday, March 16, 2017 - link
Don't compare TDP with power consumption. And then factor in that Intel and AMD consider TDP to be different things.
sushukka - Friday, March 17, 2017 - link
AMD hasn't even published their low/ultra-low TDP Ryzens yet. If their high-end models top out at 95W, I'm pretty sure the upcoming low-power models + maybe APUs will be pretty competitive too.
JocPro - Thursday, March 16, 2017 - link
Typo: the first paragraph should end in "Ryzen 7" instead of "Ryzen 5" :)
BubbaJoe TBoneMalone - Thursday, March 16, 2017 - link
Intel is probably panicking, wondering how much profit they are going to lose this year and maybe in the years to come if AMD keeps this up.
cocochanel - Sunday, March 19, 2017 - link
Add to that ARM servers.
wumpus - Monday, April 3, 2017 - link
ARM servers aren't likely a real danger, but Intel keeps ECC and similar features on the i3 for that reason. Once you add the cost of the ECC memory, enterprise drives, and similar features, the loss of performance [with ARM] costs more than any Intel Xeon chip. Zen (and especially Naples) is likely to be a different story.
IBM is also a current threat. But since anyone with an ARM license and dollar signs in his eyes can make an "ARM server", Intel has to keep the i3 ready as an example of a much more effective "low end server".
Narg - Thursday, March 16, 2017 - link
Nice to see a consumer-level CPU with ECC memory support. Too bad Intel misses out on this market desire, especially since Google has proven the large number of memory-related errors in desktop-level PCs.
fanofanand - Thursday, March 16, 2017 - link
Why are they bundling a 65/80 watt cooler with a 95W CPU? (1600X)
Outlander_04 - Thursday, March 16, 2017 - link
The 1600X ships without a cooler.
FMinus - Friday, March 17, 2017 - link
AFAIK, all models ending with an "X" ship without a cooler.
fanofanand - Thursday, March 16, 2017 - link
Ian - I have read that the Infinity Fabric speed is based off the RAM speed (2:1 I think?) and that faster RAM drastically reduces the latency when switching between CCXs. Others have attempted to test this, but I would be far more trusting of your work. Would you please take a look at this as part of your review?
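For illustration only, here's a rough back-of-the-envelope sketch (Python) of the relationship I mean. It assumes the fabric clock tracks the memory clock at half the DDR4 transfer rate (the "2:1" above) and uses a purely hypothetical fixed cycle cost for a cross-CCX probe; the real numbers are exactly what a proper test would have to measure:

def fabric_clock_mhz(ddr4_mt_s):
    # Assumption: the Data Fabric clock equals the memory clock,
    # i.e. half the DDR4 transfer rate (the "2:1" ratio mentioned above).
    return ddr4_mt_s / 2.0

def cross_ccx_latency_ns(fabric_cycles, ddr4_mt_s):
    # Latency of a fixed number of fabric cycles at a given DDR4 speed.
    return fabric_cycles / (fabric_clock_mhz(ddr4_mt_s) * 1e6) * 1e9

for rate in (2133, 2400, 2933, 3200):
    # 100 cycles is only a placeholder cost for a cross-CCX cache probe.
    print("DDR4-%d: fabric %.0f MHz, ~%.1f ns per 100 fabric cycles"
          % (rate, fabric_clock_mhz(rate), cross_ccx_latency_ns(100, rate)))

If the fabric really does scale with memory clock, the fixed cycle cost shrinks in wall-clock terms as you move from DDR4-2133 to DDR4-3200, which is why the review measurement would be so interesting.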
wow&wow - Thursday, March 16, 2017 - link
3+3, 2+2 alignments simply prove that:
1) The hero Jim Keller doesn't know the performance impact of using 2 or 1 CCX?
2) None of AMD's senior engineers knows the performance impact of using 2 or 1 CCX?
3) None of AMD's senior engineering managers knows the performance impact of using 2 or 1 CCX?
4) None of AMD's engineering executives knows the performance impact of using 2 or 1 CCX?
Conclusion: AMD engineering executives are just for delivering presentations and holding chips for pictures!
Potentially even worse:
Why 3+3, 2+2? Can't they yield a complete CCX for 4+2, 4+0?
Outlander_04 - Thursday, March 16, 2017 - link
I would conclude that you asking questions that others know the answer to does not make you well informed, nor them misinformed.
I could also speculate that 4+0 chips are Ryzen 3.
wow&wow - Thursday, March 16, 2017 - link
As a chip designer, I have been anticipating 2 CCXs for 8- and 6-core chips but 1 CCX for 4- and 2-core chips, but they should know their design and business/cost better!
Outlander_04 - Thursday, March 16, 2017 - link
Once again you miss the point completely. I'm sure they do. It's only you who does not have that information.
wow&wow - Thursday, March 16, 2017 - link
Sure, I don't, but dragging people around makes me wonder:
Sent reviewers a BIOS that had no effect : )
Asked reviewers whether Windows was newly installed : )
Now,
"AMD now doesn’t necessarily recommend turning off simultaneous multithreading to improve gaming performance, a tweak that the company suggested to reviewers during Ryzen 7’s testing process." : )
wow&wow - Thursday, March 16, 2017 - link
BTW, 4+0 for Ryzen 3 ASP is not to make money but only to show stupidity : )
lmcd - Thursday, March 16, 2017 - link
Are you telling me that 4+0 chips are likely?
wow&wow - Thursday, March 16, 2017 - link
4+0 for Ryzen 3 ASP is not to make money but only to show stupidity : )
Outlander_04 - Friday, March 17, 2017 - link
If you had a chip you could sell as a 4+0, or throw it away, then not selling it would be the stupid part.
Keep trying though.
wow&wow - Friday, March 17, 2017 - link
For an 8-core die, no more than one or two cores could be bad, or you have a shitty process; be reminded the yield issue likely is in the SRAM.
"If you had a chip you could sell as a 4 + 0 , or throw it away , then not selling it would be the stupid part ."
If you had a chip you could sell as a 4 + 0, then having a shitty process would be the stupid part, get it?
Take care of a problem at its source; don't waste resources taking care of (selling) the shit it produces, get it? Have they learned the Bulldozer lesson enough: we give users more physical cores with lower performance, higher power consumption, higher cost : )
wow&wow - Friday, March 17, 2017 - link
BTW, you would be a perfect employee for AMD, what a match in the attitude and mentality : )
Outlander_04 - Friday, March 17, 2017 - link
And if the fault in a die is not in the actual cores, but in the interconnect between CCXs?
You would not make an ideal employee anywhere. You don't seem to be able to separate your emotive fanboyism from a logical analysis of the possible scenarios.
wow&wow - Friday, March 17, 2017 - link
"in the interconnect between CCX's"Why not defect (not fault) possibly everywhere : )
Go and learn chip engineering first, if you are capable; if not in this life, maybe the next : )
wow&wow - Friday, March 17, 2017 - link
BTW, try to learn and understand statistics first!
sushukka - Friday, March 17, 2017 - link
I grant you here and now the grandmaster title of trolling. Congratulations!
Outlander_04 - Saturday, March 18, 2017 - link
He is just upset AMD have monstered the Intel offerings.
acparker18 - Thursday, March 23, 2017 - link
Learn English first, then try to correct people about things you still know absolutely nothing about.
sushukka - Thursday, March 16, 2017 - link
This year will be very refreshing in the CPU/GPU markets. First Ryzen 7, now Ryzen 5 only one month later, and roughly a month after that the R3 and Vega announcements. Finally we have some competition and a change from the normal tick-tock releases! Also, I'm surprised how narrow-minded people tend to be. First everybody is whining about how expensive the Intel and Nvidia offerings are, and then when we finally have some competition which could change the situation, people whine about how the solution is lagging a whopping 5% in some test and, because of that, piss on AMD. You can look at Intel's and Nvidia's profits over the past years compared to AMD's and draw your own conclusion about whether those two giants are really taking all they can out of the monopoly they have.
Also, we have only a small idea of how much Intel-optimized (or designed-to-fit-Intel's-architecture) software there is on the market, and without any change there will be near-zero possibility in the future for any competitor to show up and undermine that monopoly. It's a near miracle that AMD is performing this well with an all-new architecture and product lineup against Intel... and comparing the R&D budgets against Intel and Nvidia, it very much is.
Personally I'm now waiting for the Vega announcement, and after that I'll build the "reddest" PC there is. It probably won't have the ultimate best IPC, but it will probably have the best bang for the buck and will perform more than well in the majority of situations now and in the future. This is maybe a little bit sentimental, but the extra I get is the feeling that I have supported AMD and haven't been forced to buy the monopoly crap from Intel and Nvidia like it has been for over a decade now.
puncs - Sunday, March 19, 2017 - link
Same thinking. I also own tons of AMD stock. It's time to give back to the red team.
cocochanel - Sunday, March 19, 2017 - link
+1. Looking forward to a nice PC for Christmas.
zodiacfml - Friday, March 17, 2017 - link
I hope when you review Ryzen 5, you use one mid-range GPU such as the RX 480. An Nvidia 1080/Ti would add to the fun.
Haawser - Friday, March 17, 2017 - link
Why would anybody do that? All it would prove is that a CPU like the R5 1500 is perfectly fine for any mid-range card. But that would not be in their masters' interests, which is to sell as many 7700Ks as possible to gullible folks.
Outlander_04 - Friday, March 17, 2017 - link
You would do that because an absolute top-end graphics card in a system connected to a 1080p monitor tells you very little.
Psycho tek - Friday, March 17, 2017 - link
People commenting here only think of themselves, as if they need high fps and their CPU can do the same as Ryzen can do... jeeezz.
Think of it this way... AMD has given us a CPU that is more affordable than Intel, and because of this, people in third-world countries can easily afford this type of CPU. We cannot deny that people there are creative and just as intelligent as people with a PC sporting high fps and an expensive water loop.
Ryzen is not only for gaming (which is decent, by the way); it is also for content creation and other stuff (hacking maybe... lol). And the success of AMD will come from those countries.
Just be happy with your Intel CPU and let the loyalists enjoy their Ryzen. And have this argument again after 4 years, about which CPU is still relevant at that time.
Hixbot - Saturday, March 18, 2017 - link
Wow, was really hoping the lower core count i5's would be clocked higher than the 8 core i7s.
To see an image with no or little movement, there is little to no discernible difference in smoothness at 60Hz, 120Hz, or 144Hz with solid FPS. It's when you have a lot of quick movement, and a drop or drops in FPS, that the differences between these refresh rates become very noticeable. The effective refresh rate also drops significantly when FPS falls below the monitor's refresh rate: 60Hz drops to 30Hz, which is extremely noticeable and decreases the perceived smoothness of the image, while 144Hz dropping to 72Hz is not as noticeable. A video card consistently dropping frames below the monitor's refresh rate has an impact on the smoothness of the video. Then there is the difference in color on high-refresh TN monitors as opposed to IPS monitors, IPS having better picture quality. 60fps on a 144Hz monitor is going to look better than 60fps on a 60Hz monitor when it comes to smoothness of fast motion. Move your mouse cursor in circles real fast and you can see the gaps at 60Hz; at 120Hz or 144Hz the gaps are still there but much less noticeable, because the image is being redrawn twice as often or more. I found a good explanation here https://www.rockpapershotgun.com/2015/02/12/best-1...
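As a rough illustration of that drop, here's a small sketch (Python) assuming plain double-buffered V-sync, where each frame is held on screen for a whole number of refresh intervals (no triple buffering or adaptive sync, which behave differently):

import math

def effective_vsync_hz(refresh_hz, gpu_fps):
    # With double-buffered V-sync each frame is shown for a whole number of
    # refresh intervals, so the displayed rate snaps to refresh / ceil(refresh / fps).
    if gpu_fps >= refresh_hz:
        return refresh_hz
    return refresh_hz / math.ceil(refresh_hz / gpu_fps)

for refresh in (60, 144):
    for fps in (50, 75, 100):
        print("%d Hz panel, GPU at %d fps -> displayed at %.0f Hz"
              % (refresh, fps, effective_vsync_hz(refresh, fps)))

Under those assumptions a GPU rendering 50fps snaps to 30Hz on a 60Hz panel but only to 48Hz on a 144Hz panel, which is the graceful-degradation point above.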
acparker18 - Thursday, March 23, 2017 - link
There is actually a very perceivable difference between 60Hz and 120Hz motion on a monitor. Just like there is a noticeable difference between 30Hz and 60Hz, doubling the number of images displayed per second is hard to miss. Granted, I doubt there is much difference between 120Hz and 144Hz, but to say that there is no advantage to higher refresh rate monitors is absurd.
albert89 - Saturday, March 18, 2017 - link
I wouldn't mind a Ryzen 5 that's a 4-core (no SMT) with a TDP of 45-50W, OC'd to 4GHz. And I don't mind paying more for it. Although AMD might convert it to an APU, with HSA, HBM2 etc. all thrown into this monster. But this SoC would be more compatible with the mobile market, who knows. Or the APU might come out with a Vega iGPU???
Outlander_04 - Sunday, March 19, 2017 - link
That, I believe, is the plan.
lakedude - Monday, March 20, 2017 - link
It is always amazing to me how rude people are on the interweb. There is no reason for most of it.
The bottom line here is that AMD has done an amazing job getting back in the game. The 1600X is an excellent value with more than enough performance for most folks, most of the time. Thing is, no one is a "fanboy" for buying or wanting a processor that fits their needs.
Were I still doing HPC and video trans-coding I'd have snatched up a 1600X or perhaps might have even splurged for one of the 8 core models. Intel would not even have been considered for such workloads.
If gaming was still a priority I'd have most likely leaned towards the i7-7700k which is better in lightly threaded apps and cheaper than the Ryzen 7 chips as well. Maybe the i5-7600k if money was an issue.
The point is that you should buy the processor that is best matched to your workload/budget. There is no single best CPU for all applications. These new AMD chips are all over the map, crushing it in some benchmarks but lagging behind in others. It should come as no surprise that some people love Ryzen while others remain less impressed.
As it stands, I'll be buying nothing from either company; got too many mouths to feed, and honestly 5-year-old Ivy Bridge stuff is not that bad.
While I'm babbling... for typical use I think I'd rather have an i3 (or similar) with an SSD, than an i7 or Ryzen 7 with a mechanical HD...
none12345 - Tuesday, March 21, 2017 - link
This is why Intel has been annoying the hell out of me for the past 5 years or so. They could have given us an 8-core desktop chip without an IGP, for the same price as an i7, and they could have done it 5 years ago.
The desktop chips are really just overclocked/overvolted mobile chips that they have been milking on the desktop for the last 5 years.
You want more cores, you pay 3-6 times as much for the privilege. Which is nearly all profit, considering that once you drop the iGPU, it doesn't cost anything more to add more cores instead. (Granted, the 10-core chip costs a bit more to produce, but not twice as much, and certainly not 6 times as much.)
msroadkill612 - Monday, April 10, 2017 - link
" It is possible for AMD to offer a 4+0, 3+1 or 2+2 design for its quad-core parts, or 4+2 and 3+3 variants for its hexacore parts, similar to the way that Intel cuts up its integrated graphics for GT1 variants. The downside with this way is that performance might differ between the variants, making it difficult to manage. The upside is that more CPUs with defects can be used.We have confirmation from AMD that there are no silly games going to be played with Ryzen 5. The six-core parts will be a strict 3+3 combination, while the four-core parts will use 2+2. This will be true across all CPUs, ensuring a consistent performance throughout."
Ta for clarifying that.
Could be interesting consequences: fewer cores = less heat & lots more L3 for each core.
It begs a fascinating question tho.
How will they arrange the raven ridge apu?
the ~a10 apu had about half of the chip space each for the gpu & cpu.
It seems safe to assume space will be tight on the coming Zen/Vega APU.
You would think it may be attractive to use the second L3 cache's space for GPU circuitry, in which case we would have a 4-core Zen using a single L3 cache.