IBM hit 5 GHz with its POWER6 CPU family several years ago, so AMD's first boast is stretching things a bit. They're only the first with a 5 GHz consumer-oriented part; IBM used the POWER6 for its mainframes and other high-end enterprise devices.
But POWER was always a family of RISC processors. RISC processors use simpler instructions but run at higher clocks. You can't simply compare clocks and say "IBM hit 5 GHz first".
x86 has broken its complex instructions into micro-ops similar in size to traditional RISC instructions. Aside from the initial decode step, RISC vs. CISC hasn't been relevant at the implementation level for many years.
This structural complexity adds processing overhead (CISC-to-RISC translation) as well as increasing the size of the CPU, making higher frequencies harder to achieve.
Many years ago, AMD estimated (as reported by Anand) that the x86 decode was only 10% of the processor core. Since then, there have been a whole bunch of die shrinks. The x86 overhead probably amounts to less than one percent at this point, so it really isn't all that relevant anymore.
To software developers, the differences between processors and instruction sets are all abstracted away by compilers. Endianness is about the only difference developers might care about, and both POWER and ARM are bi-endian (they can run in either mode). Operating systems generally use little-endian ARM, meaning almost any code can be compiled for ARM or x86 without any changes.
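A minimal sketch of that point: as long as code names its byte order explicitly, the same bytes round-trip identically on x86, little-endian ARM, or POWER (the value and variable names here are just for illustration):

```python
import struct

value = 0x0A0B0C0D  # an arbitrary 32-bit integer

# The same integer serialized under each explicit byte order:
little = struct.pack("<I", value)   # bytes come out 0d 0c 0b 0a
big = struct.pack(">I", value)      # bytes come out 0a 0b 0c 0d

# Code that always names its byte order round-trips the value the
# same way on any host; the machine's native order never leaks in:
assert struct.unpack("<I", little)[0] == value
assert struct.unpack(">I", big)[0] == value

print(little.hex(), big.hex())  # 0d0c0b0a 0a0b0c0d
```

Endianness only bites when raw memory is reinterpreted in the host's native order (e.g. casting buffers in C), which is exactly the case compilers and serialization libraries paper over.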
To the vast majority of developers today, instruction sets are completely irrelevant and interchangeable. Even when developers do have to care about the processor architecture, they still usually don't care about the instruction set.
A 10 GHz CPU? There is a reason NetBurst never hit Intel's 10 GHz estimate: it would have drawn over 400 watts of power and, more than likely, blown up. Who on earth wants that?
"Many years ago" x86 was 32bit, and only had 8 registers. Now AMD64 is 64bit and has 16, making your argument much more accurate (8 registers *mattered* and held back x86 by a measurable amount. 16 still holds it back, but not all that much).
There is also that bit about the two-address maximum (you describe source1 & destination, source2) in instructions. This limits certain expansions (they got around it for FMAC, but it crops up in other places). I think the last time I heard about it was when Intel added scatter/gather reads, but couldn't do writes very well due to the architecture.
I suspect that emulating updating the flags with every instruction is far, far harder than decoding x86 instructions. I can't say I've ever tried to design an x86, though. Trying to make AMD64 go fast is still far harder than making an architecture built for modern design fast. Trying to sell a fast AMD64 chip is a lot easier than trying to sell a fast but incompatible chip. I also suspect that IBM POWER is nearly as out of date as AMD64 (it's easily 20 years old).
Your point is completely accurate from a developer standpoint. When was the last time you saw someone write assembler, anyway?
The 16-register thing is not that bad. I've used POWER and know its assembler; you don't want 32 registers, as the write/read time on that is just crazy for branching unless you use some of the restrictions on register usage, which means you're back at 8 anyway. The only chip that did this right was the 760, which had shadow registers, so it pushed the registers onto an internal stack 16 deep before pushing to RAM. Also, x86 was four 16-bit registers; you could divide them up, but it made life very messy (I've also done x86 assembler).
No, they haven't broken it down much; they've just added new features with smaller instructions. The original x86 architecture is still quite CISC in nature, and that's why Intel/x86 can still take performance crowns in many applications.
If you will not allow it to be said that IBM was first to 5 GHz, then I don't think you should allow AMD to say they have "the world's first commercially available 5 GHz processor."
You may find that you, as in a normal person, can't buy a 5 GHz POWER chip, so it's not technically commercially available. You can disassemble your mainframe to get one, but that makes it an expensive chip.
This is ridiculous. If they say "We have the first CPU" they must be compared to other CPUs. If they said "We have the first x86 CPU" or "We have the first AMD CPU" then comparing to IBM Power 6 would indeed be invalid.
CISC or RISC is splitting hairs, considering that AMD's claim was "first commercially available 5 GHz CPU", with no mention of complex or reduced instruction set architecture. I, too, call shenanigans on AMD.
There was never a need for AMD to "mention" that their product was only to be compared to other x86 chips. Who in the world would actually THINK that AMD would be developing anything BUT an x86?? It goes without saying. To put it plainly... the IBM POWER6 RISC chips could be compared to a weed whacker engine: extremely fast, and absolutely amazing at doing its one thing, whacking weeds with the greatest of speed and efficiency. The 9590 is the workhorse V-8 in an F-150 by comparison. Nobody would ever make any but the vaguest of comparisons between those two.
Whoa... have to disagree with you there. RISC is known for being much more efficient than CISC. In fact, modern CISC processors run RISC under the hood, translating their old crusty complex instructions into RISC-like operations on the fly.
Except we have things like AVX, SSE, AES-NI, etc. that are basically CISC-like extensions to do certain functions faster. The idea of RISC back in the day was to have a core set of instructions that were extremely simple and build a small and efficient architecture around those. Today, with billions of transistors in modern CPUs, there's little need to be purely RISC, and we now have all sorts of floating point and vector instruction sets available, with more added every generation -- simply because we can. That's why ARM has stuff like THUMB. You can get better code density with CISC and better performance with RISC, and the difference is almost entirely in the decode stage of the pipeline.
Anand talked about this five years ago (http://www.anandtech.com/show/2493/3). AMD's Fred Weber said the cost of x86 decode was 10% of the CPU core (not cache!) in the days of the original Clawhammer/Sledgehammer parts -- roughly 4-5 million transistors is what I've heard tossed about. With Haswell GT2 at 1.4B transistors and Vishera at 1.2B, we're now talking about less than 2% of the total transistor budget being spent on x86 compatibility (because we have 4/8 cores to worry about, not just one). The number of transistors being spent on caches that bring diminishing performance gains is ballooning, simply because we have "transistors to spare"!
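As a back-of-the-envelope check on those figures (the ~5M-transistors-per-decoder estimate, the die totals, and the four-decoder counts are the rough numbers cited above, not official specs):

```python
# Rough share of the transistor budget spent on x86 decode,
# using the ballpark figures from the comment above.
decode_per_core = 5_000_000  # ~4-5M transistors per decoder, Hammer-era estimate

chips = {
    "Haswell GT2": (1_400_000_000, 4),  # (total transistors, decoder count)
    "Vishera":     (1_200_000_000, 4),  # 4 modules, one decoder each
}

for name, (total, decoders) in chips.items():
    share = decode_per_core * decoders / total
    print(f"{name}: {share:.2%} of the transistor budget on x86 decode")
```

Both work out to well under 2%, consistent with the estimate above.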
You are not following me. My point is that all kinds of tricks can be used to jack up the clock speed without actually making any real improvements.
Also, not all x86 chips are created equal in terms of architecture. The Pentium 4 presided over an era where things were the opposite of what they are now. The P4 under-performed relative to AMD chips at the same GHz, to the point that AMD had to market their chips by naming them after the equivalent P4 clock speed they out-performed. This was because the P4 had an absurdly long pipeline relative to its design and no real scheduling/branch prediction compared to today.
Again the fact that all they have to talk about here is clock speed makes the whole thing stink.
Could you actually buy a POWER6 CPU on its own? Wouldn't you have to buy a full solution from IBM to get access to POWER6? I think AMD would have been better off stating it was "the first commercially produced 5 GHz processor available to consumers", which someone could spin better than what they wrote.
But companies that are not AMD can buy the part, whereas the POWER6 was an IBM-only part. It doesn't really matter overall unless the FX 9xxx offers an advantage in price vs. performance over the i7 4xxx. It may be a good solution for people who need single-threaded performance over multi-threaded, but until benchmarks are run, who knows?
IBM has hit 5+ GHz with z196 and zEC12 as well, and those implement the (very CISC-y) 360/370/390/zArchitecture instruction sets. I would also note that in a modern Tomasulo-based OoO processor the distinction between CISC and RISC comes down to a few extra decode stages. The OoO backend ends up looking pretty similar either way.
AMD's specific marketing claim is "first commercially available 5 GHz CPU", so they appear to be drawing a distinction based on the fact that IBM doesn't sell POWER or zSeries processors individually or through retail channels. I imagine they had some fun negotiating that one with the legal department...
In context it's clear that the press release is factually incorrect, and that you're trolling. That "weird" RISC architecture is a very well known big iron mainframe CPU, which quite a few people care about. Even more people care about companies not lying about their accomplishments.
"Nobody cares", "some weird RISC architecture". Yeah you're right, the world of computing just doesn't exist outside of the narrow realm of Intel processors released since 2010 and used by you and your teenage friends.
It's not the same. They are not the same class of products. They do have a similar function, but couldn't be more different under the hood.
It's like saying that Honda couldn't claim they had the naturally aspirated engine with the highest HP 'density' for a while (2000 cc, 240 PS iirc, but you can take any NA engine of the time) just because Mazda had its Wankel rotary engine in the RX-8, which displaced only 1.3L but also made 250 PS. It's just not the same thing, although they are both car engines, just as RISC and CISC CPUs are both CPUs.
No, let's be clear here. When most people hear "CPU" what they think of is "x86 CPU". And this is indeed the first commercially available 5 GHz x86 CPU.
Bringing other architectures into the picture is not valid. But I'd agree to the extent that "CPU" should be explicitly qualified as "x86 CPU" in this case.
Now that we know this is a 220W part (!), it's pretty clear that it's not for the mainstream market, since it requires pretty heavy cooling. Seems to me that this was mostly a PR stunt -- "Hey, we're the first to 5GHz". Not much different from overclocking a standard CPU using liquid nitrogen or some other exotic cooling. Seems a waste of engineering resources to bring this out.
You would do very badly in court. "But your honor! It's clear that they MEANT something else even if they SAID this." Who gives a fuck what "most people" think of when they hear "CPU"? Fact > Fiction.
Ah, but you're missing the point: FX-9590 has a maximum Turbo Core speed of 5.0GHz, which is 800MHz higher than FX-8350's max Turbo, but what's the base clock? Richland A10-6800K and Trinity A10-5800K both have the same 100W TDP, but Richland can clock 200-300MHz higher and it also has a higher GPU clock. With process refinements and tuning, we might have a base clock of 4.2 or 4.4GHz and the maximum 5.0GHz will only be hit in single-threaded loads, which would potentially be possible while remaining in the 125W TDP. Worst-case, I'd expect TDP to be 135W. We'll see what AMD has to say though....
If they bump the TDP past 150W, then Bulldozer and its derivatives truly will be AMD's NetBurst. What amazes me is that Intel actually had the guts to can Tejas. Anand actually had a chance to run some early benchmarks on Tejas at one point, so it was basically complete and Intel realized that the part just wasn't going to be any good. I think the only way AMD fixes the Bulldozer architecture is if they seriously reorganize the pipeline at some point and stop going so deep, but I don't see that happening.
I think it was an easy decision for Intel to make on Tejas in the end. The power numbers came in really high at the high frequencies, and they realized they could achieve better real performance at lower power and lower frequencies with a dual core. This became a "right hand turn" for them and set up the direction for all future architectures.
Really? I looked over all the Tejas stuff but it never appeared to get to the benchmark stage, unless he withheld them? In which case, even now that it doesn't matter at all, I would still absolutely love to see the benchmarks, surely Intel can't care now.
I know Anand had a chance at some point to play with early silicon. I don't know if he still has any numbers, or maybe they just let him use it at a lab but no benches? I'll have to ask him....
What I was referring to you won't find outside of Intel because Tejas never got that far out of the gate. It was clear during the design phase in Austin it would take too much power to get to the single core MHz target they wanted. Dual core beat it.
Why don't you think that AMD is going to reduce the pipeline length? I was under the impression that there were already some steps being made in that direction with Steamroller. The original Zambezi Bulldozer release was such a fiasco that it got AMD's management team canned, and the new management openly admits it was a failure. Stuff like this 5 GHz chip is clearly just a stopgap solution; in the medium term (next couple of years) they either need to fix their architecture so single-thread IPC is at least at Sandy Bridge standards, or they need to just ditch the construction equipment line of cores altogether and return to a K10 derivative (just as Intel ditched NetBurst for the P6-derived line of CPUs that continues to this day).
I doubt that AMD can get SB level single thread performance with Steamroller. These cores are much smaller than Intel's; the real performance boost should be in multithreaded scenarios, though it shouldn't be difficult for Steamroller to, well, steamroll previous FX processors in even light workloads. Nehalem single threaded performance would be nice.
If AMD can stay afloat for a few more years (and they've managed to do so under worse conditions than now so I don't see why they won't), they'll close the performance gap with Intel.
Haswell has demonstrated that x86 is asymptotically approaching "the fastest per-core that it will ever be". The writing was already on the wall with Ivy Bridge, and now Haswell has put the definitive stamp on the issue.
It's simply not cost-effective for Intel to pump in the ever-increasing R&D dollars, not to mention transistor count, necessary to advance x86 IPC significantly. The x86 market is no longer growing relative to other chip markets thanks to mobile devices, and "more than good enough for 99% of uses" was already achieved by x86 some time ago. There just aren't that many dollars chasing higher x86 performance anymore, and Intel can't continue to spend what is necessary to advance x86 speeds on a reduced performance-dollar budget.
Given that, AMD now has time to play catch up. Intel is no longer going to be making each of its successive chip architectures significantly faster than the previous, so AMD is no longer chasing a moving performance target. Because AMD has not hit the same walls that Intel has hit in terms of IPC improvements yet, it has room to advance its performance more quickly than Intel does - until it hits the same walls in a couple of years.
At that point, AMD and Intel will have fairly similarly performing parts from an IPC perspective. Intel will still have superior process technology which will allow it to have a better high end story, although not nearly as much better as it has now. Intel will also have better thermals and power use due to superior process technology as well as generally better designs.
Even though AMD will eventually reach near parity with Intel on performance, it will not benefit from it the way Intel has. Because by that point, the x86 market will be even further cannibalized by mobile devices and AMD will be fighting for an ever larger piece of an ever smaller revenue stream. Intel already milked x86 for the best of what it was worth, from a profit perspective, over the past few years, and by the time AMD is in a position to significantly challenge Intel, the milking will be over. AMD will continue to do just well enough to stay afloat but will never make boatloads of money off of x86 like Intel has.
Considering that an overclocked Piledriver uses 268 watts (from the source I heard) and everything I have seen points to a max 220W power usage, it is amazing.
And they could get it down to 150 (maybe) if the parts are similar to the Richland xxx5/xxx7 parts... I don't know :(
And my understanding is that Steamroller is a deep change to the Bulldozer uarch.... :)
Also, considering that NetBurst's power consumption was mostly from leakage (and I believe most of Bulldozer's is not), the issue is much more about the silicon it's made on....
On another note, I met someone who apparently has run their 8350 at 1.7 volts since they got it (right when it came out) 0.0.... This scares me because AMD has been really conservative with voltages on Llano and Trinity.... If they are not conservative, I think 150W might be possible for the part that turbos to 4.7 :)
The rumors have been saying 220W TDP. That sounds outrageous, but the rumors about "Centurion" have turned out to be right so far. That would explain why they are starting off by selling to system integrators only - if they sold a 220W chip through normal retail channels, it would result in a lot of fried motherboards (and maybe even house fires).
Just for reference, because it's somewhat interesting: the highest ever TDP on a retail x86 CPU to date belongs to AMD's 140W Phenom II X4 965 BE. If we widen the search a bit, I don't know about RISC CPUs, but Itanium 2 had a maximum TDP of 150W, and that was an absolutely MASSIVE chip. I could see AMD going for 140W again (likely because they need the power to hit the clocks they're talking about), but more than that would be seriously crazy train.
What's scary is the more I think about it, the more the "system integrator" stuff makes me think there might be credence to some of the 200W or higher rumors. It would also explain the bump to FX-9000 series ("9000 is for high power parts"), at least in part.
I really hope AMD doesn't try to go with a >150W part, and more than that I really hope most consumers are wise enough to not buy one, especially if they launch at significantly higher prices which is what I'm now hearing. Could you imagine $500 for the FX-9590? More than any Haswell or Ivy Bridge part, and while I suppose it would be competing against the SNB-E parts, I'm not sure what benchmarks it would win, even at 5GHz. Guess we'll know in the next couple of months.
Is it possible this is a die shrink? Like Apple moved their A5 from 45nm to 32nm quietly, could AMD have moved to 28nm without mentioning? After all, Steamroller is supposed to be 28nm GloFo which hasn't been used before.
125 W should be enough for single threaded turbo, but loading all cores requires significantly more juice. That Trinity / Richland example is actually quite nice in this regard: they need 100 W even for 2 modules at 4+ GHz. Well, they can stay within reasonable TDPs if they just keep the base clock low enough...
It'll come down to real performance metrics, price, and power. AMD led the charge away from the MHz race years ago with a focus on actual performance. Strange that they're pushing MHz again now.
Higher frequency isn't a bad thing. What's bad is killing your cache sizes and having ridiculously long pipelines to do it (Pentium 4). This new CPU looks to be a new revision of Piledriver that can just flat-out run faster. I'm thinking it's somewhat like the old days, where you'd see a CPU double in frequency from initial release to final stepping (P3 Coppermine). I doubt AMD went over 125W, since if they did, absolutely everyone would blast them, just like Intel was blasted back in the late P4 days. Trying to market 125W vs. 84W is bad enough, but 150W vs. 84W is getting silly.
I see this CPU actually beating Intel in some multithreaded applications. Of course, 125W is a lot higher than 84W, and if you overclock the Haswell up to 125W...
Hopefully Steamroller will narrow the IPC gap enough that the higher clock speeds are enough to catch Intel.
djs, if you're like me, where I want a 60% performance gain as a bare minimum vs my current chip, and not for $400+, but more like $200-, then there is more waiting to do (though in my case, I'm only waiting on price at this point).
I believe it is more like a few percent... maybe..... XD ;)
But the gains from Windows to Linux for the BD uarch are crazy..... well, if both platforms are optimized; I have not seen anything with vanilla Linux (no optimizations by the user). That would be interesting!
The performance gains come from using software that is compiled with a non-biased, optimized compiler, and from a kernel scheduler that is actually aware of the architecture of the CPU.
I think a 200W TDP is unrealistic; I doubt there was ever a stock consumer CPU with such a rating, and it seems unlikely any manufacturer would try to push such parts as even "enthusiast" class. 140W may be a realistic target, or why not even 125W, as was the case with the Phenom II X6 series? They were able to slip into the same power envelope as the older X4 parts, despite having two more cores.
I imagine GF's 45nm process was sufficiently mature by the time the X6 was due for them to make some decent power savings; however, they were generally clocked below the X4 series.
I am skeptical about the 125W TDP, but if that's the case, the performance gain from a base clock increase isn't that bad. I would probably invest in that. I don't have much hope for increased game performance. However, I think if the next-gen consoles start bringing multi-core-friendly titles to the PC, AMD's parts might have more longevity than they're given credit for.
As far as gaming goes, Ian has already shown that the vast majority of games are GPU limited at high quality settings until you get more than two GPUs -- provided you have at least an A10-5800K or FX-8350, your CPU is "fast enough" for nearly any game we're likely to see in the next couple of years.
That's only true for a certain subset of games, and generally those games are console ports (i.e., simple CPU stuff, where 'high' settings means "just add more textures and launch it on PC!").
Pretty much any grand-scale PC game (like MMOs or RTS games) is heavily CPU bound, and in the vast majority of cases where the CPU matters, AMD falls flat on its face.
Though console ports will still probably run well, considering the CPUs in both of the new consoles are low-power, netbook-style parts. So I'm sure there will be enough games for everyone to play.
I can only hope both the server and client for Planetary Annihilation have benchmarking tools, that would be great.
It all depends on the games. You can't conclude that the vast majority of games are GPU limited unless you actually test a large number of modern games; Ian only tested 4. There are a large number of games out there, often PC-only multiplayer games, that are massively CPU bound. Check out this one: http://www.bit-tech.net/hardware/cpus/2012/05/01/i... In Shogun 2, an overclocked i5-3570K gets triple the minimum FPS of a stock Phenom II X6 1100T (36 FPS vs. 12 FPS!). The gains in Arma II are large too, at 80% higher minimum FPS.
By the same token, an overclocked i5-3570K is twice as fast as an i7-3930K in Shogun 2 according to the graph you linked to. We know the 3570K doesn't stand a chance in multi-threaded anything against the 3930K. This clearly suggests this game is single threaded because the OC 3570 is also faster than the OC 3770K with the results you presented. To suggest they are CPU bound is disingenuous if they're only using 1 core. Moreover, the memory controller also features prominently in some games and this needs to be addressed as well.
Jarred, for now the Next Generation CPU core, also known as Steamroller, will only find its way into APUs.
AMD has not (yet) given any indication as to when the Piledriver-based FX MPUs will transition to a Steamroller-based FX series. So you might need to tweak the below statement like AMD tweaked the FX series ;-)
<quote> Bulldozer was 1st Generation FX-series, Piledriver is 2nd Generation FX-series, and ahead we still have Steamroller (3rd Generation FX-series) and Excavator (4th Generation FX-series), but they’ve chosen a different route. </quote>
But tell me this: is the base clock the same 4.0 GHz for the FX-9590?? If 5 GHz is turbo, it is certain that certain cores are turned off to guarantee that. Apparently AMD has been hiring marketing BS people from Samsung (of Exynos "Octa" fame) while Samsung has been poaching AMD's engineering talent. ;-)
I took out the FX-series reference for you, since you're correct. As for the base clock, it remains (as yet) undisclosed, probably for good reason. And I'm not sure if AMD poached Samsung's marketing people or if they just hired the same marketing group that handled Intel's NetBurst clock speed frenzy. :-)
Ha. If those were true, there'd be no point whatsoever for the 9590. Steamroller would obliterate it in nine or so months from now.
If the 9590 had a base clock of 4 to 4.2GHz, I could very much believe a TDP of 140W max, but no lower, as turbo is per module as opposed to per core. However, from the various rumours floating about, there's little chance of a base clock that "low", meaning some significant power draw at load. Would this halo product actually use a proper implementation of CnQ in order to keep clocks and power down at idle?
Overclocking any CPU can dramatically raise power consumption; I imagine a 5GHz i7-2600K would suck down some power, but anything thereafter would drop that a fair bit.
If those benchmarks are anywhere near true, the next thing we need to know is prices. Will it be on par with i7 4770k?
It's always funny to see the criticism made of AMD and their 5 GHz CPUs. Even if AMD beats Intel performance-wise, there will always be people saying 'it does, but consumption is that much higher'. Basically Intel is just changing the rules so that they always win. And Intel is how many times bigger than AMD? It's hard to compete when you have far less money...
No one's changing the rules. AMD had better products than Intel during the NetBurst era because they had better IPC and lower power consumption, even if the top-of-the-line Intel CPU sometimes beat AMD's in raw performance. In late 2003, Intel announced a modified Xeon CPU as the Pentium 4 Extreme Edition, which narrowly defeated the Athlon 64 in many benchmarks, but cost $999 and was widely panned by reviewers. Do a Google search for 'Pentium Emergency Edition' to see what people were saying about it at the time. This 5 GHz Vishera is basically the same thing coming from AMD: a stopgap solution which has high clock speed but low IPC, and terrible power efficiency and thermals. Intel only broke out of this pattern when they updated their core to Conroe (Core 2 Duo) and dropped their focus on high clock speed and long pipelines. AMD will have to do the same if they intend to compete. I know their focus isn't as much on the desktop as it used to be, but the construction equipment series is very ill-suited for portable devices, so it doesn't even make sense from that perspective. The only place the high-core, low-IPC, weak-FPU parts might have any advantage at all is in the server market - and they have not been very successful even there.
Well, at least AMD is looking at the non-APU performance CPU segment. Even if this isn't the offering much of anyone wanted. It's nice to see AMD do some token competition.
Ah, for the days of Thunderbird... when being a rebel meant a faster and cheaper CPU. Now being a rebel just means you are on a budget and have to make do.
For goodness' sake, Intel proved in spectacular fashion with the P4 architecture that clock speeds don't mean jack. Core 2, Nehalem, and everything that has followed have shown that architecture can be everything. Yet here we are...
AMD resembles the USSR more every year:
AMD: "You do something... we can do bigger!" INTEL: "but... um... the point was to make it small..." AMD: "I say point is make big! Strong like bull! Smart like tractor! High clock speed like Bulldozer!!!"
Let me just whip out my 1.21 jiggawatt processor I've got here...
The TDP of this thing is insane (in a bad way). Ludicrously high TDPs to get minor clockspeed bumps (relative to power investment) smack of marketing BS.
This is like Intel racing AMD with Intel a few car lengths ahead, but with a dead end up the road. Play catch-up while you can, AMD, before that dead end turns out to be a new access road and Intel has more road to stay ahead of you.
Over the years, Intel has usually had the advantage of being a process node (or 2 now) ahead of AMD. That makes it more difficult for AMD to compete in either performance or power. I give AMD credit for doing as well as they have until the last few years. Now, not having their own fab makes it more difficult still.
Hey, they're still only one node ahead: 32nm to 22nm was Intel's last shrink. Only problem is Intel is due to roll out 14nm around the time AMD goes to 28nm, so AMD is only getting a half-node the next time. Of course, AMD (and GF) are behind on the 28nm transition already.
I think they're relying on the high density libraries to bring down Excavator's power usage in line with another die shrink. It would help immeasurably if they brought that particular idea forward to Steamroller, really.
Global Foundries can be blamed for some of AMD's woes, but not all. Intel manufactured Sandy Bridge on 32nm; if AMD had anything that good, I think there would be a lot less dissatisfaction with their offerings. The problem is that Bulldozer/Piledriver just has atrociously low IPC and power efficiency compared to Intel's last couple of generations.
The funny part is that Intel could never afford to have AMD go under. The industry has been free of anti-trust lawsuits mainly because Intel busts out their "gimp" (AMD) as a counter-example.
The cool thing about that though is that AMD will always have the resources to make use of chip design talent (especially something revolutionary). After all Intel's Core2 chips came out of nowhere and caught AMD flat-footed.
As much as we complain about it, business-wise AMD relies heavily on the APPEARANCE of being neck and neck with Intel. Intel can generally manage to outdo them in almost every way but I wonder whether they deliberately allow these sorts of moments to happen so that AMD does not get marginalized completely and put out of business.
At 220W, I doubt the OEM solutions are air-cooled! Understandable that these are going to system integrators only --- that is just nuts! I'm thinking the only possibilities here for who is buying them is Alienware and VoodooPC.
Maybe they integrated a crazy-strong GPU after their experience designing the SoCs for both the PS4 and Xbox One? If OEMs build a proper box with closed-loop cooling in it, it could enable some very interesting designs, which would be harder if the heat was coming from two chips.
Since the new console announcements, I've actually been hoping to see some way to convert them back to 'normal PC' usage, because they would make awesome cheap gaming rigs. And historically, all consoles ever released have had some alternative way of using them. ;)
The FX series are plain CPUs with no integrated graphics. What you're looking for is Kaveri, which is coming later this year and is supposed to integrate a reasonably good GPU and have a homogeneous memory architecture similar to what is in the new consoles.
"So we're basically looking at a 76% increase in TDP relative to the FX-8350 to get a 19% increase in maximum clock speed. It's difficult to imagine the target market for such a chip"
Well... there are always those who think it'll overclock better than previous chips, but I can't imagine that being a particularly large market. Kind of leaves a bitter taste.
I wonder what amount of power AMD expects this to use on average.
Well, the 125 watt 8350 uses about 213 watts under maximum load with turbo enabled. That's about a 70 percent increase. If the same ratio applies to the 9000 series, that would be roughly 375 watts.
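The extrapolation above can be sketched in a few lines. Note that both input figures are the commenter's claimed measurements, not official AMD specifications, and linear scaling of load draw with rated TDP is an assumption:

```python
# Back-of-envelope extrapolation of full-load power draw for the
# FX-9000 series, assuming measured load power scales with rated TDP
# the same way it does for the FX-8350. Input figures are the
# commenter's claimed numbers, not official AMD specs.

fx8350_tdp = 125.0    # watts, rated TDP
fx8350_load = 213.0   # watts, claimed full-load draw with turbo on

ratio = fx8350_load / fx8350_tdp   # ~1.70, i.e. ~70% over rated TDP

fx9590_tdp = 220.0    # watts, rated TDP
fx9590_load_estimate = fx9590_tdp * ratio

print(f"load/TDP ratio: {ratio:.2f}")                            # 1.70
print(f"estimated FX-9590 full-load draw: {fx9590_load_estimate:.0f} W")  # 375 W
```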
Wouldn't a race involve more than one competitor? Intel gave up on the GHz race when they decided to end Netburst at 3.8 GHz, and go dual-core (then Core-architecture) instead.
AMD's FX-9000 series' 220W power draw demonstrates exactly why Intel (and AMD) stopped focusing on MHz in the first place and started building multi-core chips with higher total performance and lower total power.
I think I have a couple delta black label 60mm fans sitting around and a couple Alpha PAL heatsinks. Probably could mod them to fit. Or I could get out my full-on double panaflo (metal) fan watercooling setup with Chevy heatercore.
Give me a break, AMD, why even release this? Realistically, releasing this chip is equivalent to AMD waving a white flag. The market has shifted to efficiency ACROSS THE BOARD: mobile devices, servers, HTPC, desktop, you name it. A new GHz race? Please, consumers won't fall for it this time.
I doubt the 220W is really TDP; more likely it's total power consumed, despite what some may say. If you OC an FX-8350 to 5.0 GHz and measure the total consumed power, it's closer to 165W, so that is what the actual TDP is likely to be for comparison to the 125W TDP rating of the stock FX-8350. The claimed 220W TDP may be to justify not selling these CPUs to the retail segment?
Um, no. Not at all. If you overclock an FX-8350 to 5.0 GHz and do nothing with it and measure idle power, yes, it's probably around 165W. As soon as you put any sort of load on it, it will jump up to 300W or more. TDP is about the maximum power a chip will use under load, not about what it will use in a light workload. Surfing the web, it's probably going to consume 40-50W; playing a game or encoding a video, it will probably be very close to that 220W.
I would bet they'll eventually sell the FX-9000 parts in the retail market as well (though potentially only as OEM chips rather than boxed chips). They would need a good motherboard and cooler though, which AMD can't necessarily guarantee on the motherboard side, so we'll see.
For me, such a high TDP is no concern; I have a good air cooler. I hope new motherboards come out for it. Here in Brazil there is a shortage of good motherboards for AMD, and there isn't great retail support for parts. Even so, I chose AMD and will always keep using their products.
DanNeely - Tuesday, June 11, 2013 - link
IBM hit 5 GHz with its POWER6 CPU family several years ago, so AMD's "first" boast is stretching things a bit. They're only the first with a 5 GHz consumer-oriented part; IBM used the POWER6 for its mainframes and other high-end enterprise devices.
http://en.wikipedia.org/wiki/POWER6
Kalki70 - Tuesday, June 11, 2013 - link
But POWER was always a family of RISC processors. RISC processors use simpler instructions, but at higher clocks. You can't simply compare clocks and say "IBM hit 5 GHz first".
DanNeely - Tuesday, June 11, 2013 - link
x86 has broken its complex instructions into micro-ops similar in size to traditional RISC instructions. Aside from the initial decode step, RISC vs. CISC hasn't been relevant at the implementation level for many years.
DERSS - Wednesday, June 12, 2013 - link
This structural complexity adds processing overhead (CISC-to-RISC translation) as well as increasing the size of the CPU, making higher frequencies harder to achieve.
Guspaz - Friday, June 14, 2013 - link
Many years ago, AMD estimated (as reported by Anand) that the x86 decode was only 10% of the processor's core. Since then, there have been a whole bunch of die shrinks. The x86 overhead probably amounts to less than one percent at this point, so it really isn't all that relevant anymore.
To software developers, the differences between processors and instruction sets are all abstracted away by compilers. Endianness is about the only difference developers might care about, and both POWER and ARM are bi-endian (can use either). Operating systems generally use little-endian ARM, meaning almost any code can be compiled for ARM or x86 without any changes.
To the vast majority of developers today, instruction sets are completely irrelevant and interchangeable. Even when developers do have to care about the processor architecture, they still usually don't care about the instruction set.
dsumanik - Sunday, June 16, 2013 - link
Bottom line, I don't care if it's inefficient, slower, or if the 220W TDP requires phase change. Pretty cool to have a stock 5 GHz CPU in your system!!!
Tired of the Intel monopoly selling the same thing with a different sticker for the last 3 years.
Like Haswell was supposed to be a revolution in graphics AND IPC.
Yawn central.
AMD still has the better IGP, and power barely dropped; in some cases it increased.
I mean, it's 2013. Where's my 10 GHz CPU?
Kudos to AMD!!!
TheinsanegamerN - Sunday, June 30, 2013 - link
A 10 GHz CPU? There is a reason that NetBurst never hit Intel's 10 GHz estimate: it would have drawn over 400 watts of power, and more than likely blown up. Who on earth wants that?
Cptn_Slo - Monday, July 8, 2013 - link
AMD does not have the better IGP; Intel's Iris Pro is easily twice as good with half the power consumption. AMD will not get better until they move to a smaller process node.
wumpus - Monday, June 24, 2013 - link
"Many years ago" x86 was 32-bit and only had 8 registers. Now AMD64 is 64-bit and has 16, making your argument much more accurate (8 registers *mattered* and held back x86 by a measurable amount; 16 still holds it back, but not all that much).
There is also that bit about the two-address maximum (you describe source1 & destination, then source2) in instructions. This limits certain expansions (they got around it for FMAC, but it crops up in other places). I think the last time I heard about it was that Intel added scatter/gather reads, but couldn't do writes very well due to the architecture.
I suspect that emulating updating the flags with every instruction is far, far harder than decoding x86 instructions. I can't say I've ever tried to design an x86, though. Trying to make AMD64 go fast is still far harder than making an architecture built for modern design fast. Trying to sell a fast AMD64 chip is a lot easier than trying to sell a fast but incompatible chip. I also suspect that IBM POWER is nearly as out of date as AMD64 (it's easily 20 years old).
Your point is completely accurate from a developer standpoint. When was the last time you saw someone write assembler, anyway?
xvart - Friday, July 5, 2013 - link
The 16-register thing is not that bad. I've used POWER and know its assembler; you don't want 32 registers. The write/read time on that is just crazy for branching unless you use some of the restrictions on register usage, which means you're back at 8 anyway. The only chip that did this right was the 760; it had shadow registers, so it pushed the registers onto an internal stack 16 deep before pushing to RAM. Also, the original x86 had four 16-bit registers; you could divide them up, but it made life very messy (I've also done x86 assembler).
scottwilkins - Wednesday, June 12, 2013 - link
No, they haven't broken it down much; they've just added new features with smaller instructions. The original x86 architecture is still quite CISC in nature, and that's why Intel/x86 can still take performance crowns in many applications.
Zink - Tuesday, June 11, 2013 - link
If you will not allow it to be said that IBM was first to 5 GHz, then I don't think you should allow AMD to say they have "the world's first commercially available 5 GHz processor."
shadab47 - Thursday, June 20, 2013 - link
Right.
xvart - Friday, July 5, 2013 - link
You may find that you, as a normal person, can't buy a 5 GHz POWER chip, so it's not technically commercially available. You can disassemble your mainframe to get one, but that makes it an expensive chip.
MrSpadge - Wednesday, June 12, 2013 - link
This is ridiculous. If they say "We have the first CPU," they must be compared to other CPUs. If they said "We have the first x86 CPU" or "We have the first AMD CPU," then comparing to the IBM POWER6 would indeed be invalid.
woogitboogity - Wednesday, June 12, 2013 - link
CISC or RISC is splitting hairs, considering that AMD's claim was "first commercially available 5 GHz CPU" with no mention of complex or reduced instruction set architecture. I, too, call shenanigans on AMD.
Howlingmoon - Wednesday, June 12, 2013 - link
There was never a need for AMD to "mention" that their product was only to be compared to other x86 chips. Who in the world would actually THINK that AMD would be developing anything BUT an x86? It goes without saying. To put it plainly, the IBM POWER6 RISC chips are like a weed-whacker engine: all torque, extremely fast, and absolutely amazing at its one thing, whacking weeds with the greatest of speed and efficiency. The 9590 is the workhorse V8 in an F-150 by comparison. Nobody would ever make any but the vaguest of comparisons between the two.
danrien - Thursday, June 13, 2013 - link
Whoa... have to disagree with you there. RISC is known for being much more efficient than CISC. In fact, modern CISC processors run RISC under the hood and translate their old, crusty complex instructions to RISC-like instructions on the fly.
JarredWalton - Thursday, June 13, 2013 - link
Except we have things like AVX, SSE, AES-NI, etc. that are basically CISC-like extensions to do certain functions faster. The idea of RISC back in the day was to have a core set of instructions that were extremely simple and build a small and efficient architecture around those. Today, with billions of transistors in modern CPUs, there's little need to be purely RISC, and we now have all sorts of floating point and vector instruction sets available, with more added every generation -- simply because we can. That's why ARM has stuff like THUMB. You can get better code density with CISC and better performance with RISC, and the difference is almost entirely in the decode stage of the pipeline.
Anand talked about this five years ago (http://www.anandtech.com/show/2493/3). AMD's Fred Weber said the cost of x86 decode was 10% of the CPU core (not cache!) in the days of the original Clawhammer/Sledgehammer parts -- roughly 4-5 million transistors is what I've heard tossed about. With Haswell GT2 at 1.4B transistors and Vishera at 1.2B, we're now talking about less than 2% of the total transistor budget being spent on x86 compatibility (because we have 4/8 cores to worry about, not just one). The amount of transistors being spent on caches that bring diminishing performance gains is ballooning, simply because we have "transistors to spare"!
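As a rough sketch of the arithmetic above: the per-decoder transistor count is the Fred Weber figure quoted in the comment, while treating Vishera as four shared front-ends (one per Piledriver module) is an assumption for illustration:

```python
# Rough share of Vishera's transistor budget spent on x86 decode.
# 4-5M transistors per decoder is the K8-era figure quoted above;
# each Piledriver module's pair of cores shares a front-end, so
# 4 decoders total (an assumption for this sketch).

decode_transistors = 5e6    # per front-end, upper end of the 4-5M range
frontends = 4               # Vishera: 4 modules
total_transistors = 1.2e9   # Vishera die, approximate

share = decode_transistors * frontends / total_transistors
print(f"x86 decode share: {share:.1%}")   # ~1.7%, i.e. "less than 2%"
```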
JlHADJOE - Saturday, June 15, 2013 - link
You're implying that the POWER6 can't be used for general-purpose tasks. I'm pretty sure it can.
woogitboogity - Monday, July 8, 2013 - link
You're not following me. My point is that all kinds of tricks can be used to jack up the clock speed without actually making any real improvements.
Also, all x86 chips are not created equal in terms of architecture. The Pentium 4 presided over an era where things were the opposite of what they are now: P4s under-performed for their clock speed compared to AMD chips of the same GHz, to the point that AMD had to market their chips by naming them after the P4-equivalent clock speed they out-performed. This was because the P4 had an absurdly long pipeline relative to its design and no real scheduling/branch prediction compared to today.
Again the fact that all they have to talk about here is clock speed makes the whole thing stink.
Klinky1984 - Wednesday, June 12, 2013 - link
Could you actually buy a POWER6 CPU on its own? Wouldn't you have to buy a full solution from IBM to get access to POWER6? I think AMD would have been better off stating it was "the first commercially produced 5 GHz processor available to consumers," which someone could spin better than what they wrote.
Voldenuit - Thursday, June 13, 2013 - link
You can't buy the FX 9xxx either. It's only available to system integrators, presumably to stop DIY builders from setting their houses on fire...
lwatcdr - Friday, June 14, 2013 - link
But companies that are not AMD can buy the part. With the POWER6 it was an IBM-only part. It doesn't really matter overall unless the FX 9xxx offers an advantage in price vs. performance over the i7 4xxx. It may be a good solution for people that need single-threaded performance over multi-threaded, but until benchmarks are run, who knows?
samlebon2306 - Monday, June 24, 2013 - link
Actually they bundle them with a fire extinguisher.
JlHADJOE - Saturday, June 15, 2013 - link
You mean like this?
http://www.upgradebay.com/Products/ProductInfo.asp...
patrickjchase - Monday, June 17, 2013 - link
IBM has hit 5+ GHz with z196 and zEC12 as well, and those implement the (very CISC-y) 360/370/390/zArchitecture instruction sets. I would also note that in a modern Tomasulo-based OoO processor, the distinction between CISC and RISC comes down to a few extra decode stages; the OoO backend ends up looking pretty similar either way.
AMD's specific marketing claim is "first commercially available 5 GHz CPU", so they appear to be drawing a distinction based on the fact that IBM doesn't sell POWER or zSeries processors individually or through retail channels. I imagine they had some fun negotiating that one with the legal department...
JDG1980 - Tuesday, June 11, 2013 - link
In context, it's obvious that AMD's claims only refer to x86 CPUs. No one cares who did what with some weird RISC architecture.
FaaR - Tuesday, June 11, 2013 - link
In context it's clear that the press release is factually incorrect, and that you're trolling. That "weird" RISC architecture is a very well known big-iron mainframe CPU, which quite a few people care about. Even more people care about companies not lying about their accomplishments.
bji - Tuesday, June 11, 2013 - link
"Nobody cares," "some weird RISC architecture." Yeah, you're right, the world of computing just doesn't exist outside of the narrow realm of Intel processors released since 2010 and used by you and your teenage friends.
lukarak - Wednesday, June 12, 2013 - link
It's not the same. They are not the same class of products. They have a similar function, but couldn't be more different under the hood.
It's like saying that Honda couldn't claim they had the naturally aspirated engine with the highest HP 'density' for a while (2,000 cc, 240 PS, IIRC, but you can take any NA engine of the time) just because Mazda had its Wankel rotary engine in the RX-8 that only had 1.3L but also had 250 PS. It's just not the same thing, although they are both car engines, just as RISC and CISC CPUs are both CPUs.
maximumGPU - Friday, June 14, 2013 - link
That's a pretty accurate analogy!
UltraTech79 - Saturday, June 15, 2013 - link
What are you, 9 years old? The only "nobody" here is you.
rs2 - Tuesday, June 11, 2013 - link
No, let's be clear here. When most people hear "CPU" what they think of is "x86 CPU". And this is indeed the first commercially available 5 GHz x86 CPU.Bringing other architectures into the picture is not valid. But I'd agree to the extent that "CPU" should be explicitly qualified as "x86 CPU" in this case.
Hector2 - Friday, June 14, 2013 - link
Now that we know this is a 220W part (!), it's pretty clear that it's not for the mainstream market, since it requires pretty heavy cooling. Seems to me that this was mostly a PR stunt -- "Hey, we're the first to 5 GHz." Not much different from overclocking a standard CPU using liquid nitrogen or some other exotic cooling. Seems a waste of engineering resources to bring this out.
UltraTech79 - Saturday, June 15, 2013 - link
You would do very badly in court. "But your honor! It's clear that they MEANT something else even if they SAID this." Who gives a fuck what "most people" think of when they hear "CPU"? Fact > fiction.
arbit3r - Tuesday, June 11, 2013 - link
"we're also missing details on TDP, cache size, etc. but those will likely be the same as the FX-8350/8320"
I doubt the TDP will be the same; the rumor is over 200 watts, which sounds about right for an overclocked AMD CPU.
MrSpadge - Tuesday, June 11, 2013 - link
"we're also missing details on TDP, cache size, etc. but those will likely be the same as the FX-8350/8320 (i.e. 4x2MB L2, 8MB L3, 125W TDP)"
No way they're achieving these clocks at a 125W TDP! The FX-8350 already (almost) hits that mark.
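A rough feel for why these clocks strain a 125W envelope comes from the classic CMOS dynamic-power approximation, P proportional to f x V^2. This is only a sketch: the voltage figures below are illustrative guesses, not AMD numbers, and the model ignores leakage entirely:

```python
def scaled_power(p_watts, f_old, v_old, f_new, v_new):
    """Classic CMOS dynamic-power approximation: P is proportional to
    frequency times voltage squared. Ignores leakage, so real chips
    fare worse than this estimate."""
    return p_watts * (f_new / f_old) * (v_new / v_old) ** 2

# Illustrative guess: FX-8350 at 4.0 GHz / 1.30 V drawing 125 W,
# pushed to an all-core 5.0 GHz at a hypothetical 1.45 V.
estimate = scaled_power(125, 4.0, 1.30, 5.0, 1.45)
print(f"estimated power at 5.0 GHz: {estimate:.0f} W")   # ~194 W
```

Even with these charitable guesses, an all-core 5 GHz lands well above 125W before leakage is counted, which is consistent with the rumored 220W rating.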
JarredWalton - Tuesday, June 11, 2013 - link
Ah, but you're missing the point: the FX-9590 has a maximum Turbo Core speed of 5.0GHz, which is 800MHz higher than the FX-8350's max Turbo, but what's the base clock? The Richland A10-6800K and Trinity A10-5800K both have the same 100W TDP, but Richland can clock 200-300MHz higher and it also has a higher GPU clock. With process refinements and tuning, we might have a base clock of 4.2 or 4.4GHz, and the maximum 5.0GHz will only be hit in single-threaded loads, which would potentially be possible while remaining in the 125W TDP. Worst case, I'd expect TDP to be 135W. We'll see what AMD has to say though....
testbug00 - Tuesday, June 11, 2013 - link
Richland has a secret that allows higher clocks at lower power, however... I would tell you, but it is Charlie's to tell :/
And a link: http://techreport.com/news/24940/amd-intros-fx-959...
JarredWalton - Tuesday, June 11, 2013 - link
If they bump the TDP past 150W, then Bulldozer and its derivatives truly will be AMD's NetBurst. What amazes me is that Intel actually had the guts to can Tejas. Anand actually had a chance to run some early benchmarks on Tejas at one point, so it was basically complete, and Intel realized that the part just wasn't going to be any good. I think the only way AMD fixes the Bulldozer architecture is if they seriously reorganize the pipeline at some point and stop going so deep, but I don't see that happening.
Soleron - Tuesday, June 11, 2013 - link
Jarred, this is a halo part. Low production numbers, low yields, high TDP, SUPER high cost. Expect $500 easy.
I'm not sure why you think this is a Bulldozer replacement.
Hector2 - Tuesday, June 11, 2013 - link
I think it was an easy decision for Intel to make on Tejas in the end. The power numbers came in really high at the high frequency, and they realized they could achieve better real performance at lower power and lower frequencies with a dual core. This became a "right-hand turn" for them and set up the direction for all future architectures.
tipoo - Tuesday, June 11, 2013 - link
Really? I looked over all the Tejas stuff but it never appeared to get to the benchmark stage, unless he withheld them? In which case, even now that it doesn't matter at all, I would still absolutely love to see the benchmarks, surely Intel can't care now.JarredWalton - Tuesday, June 11, 2013 - link
I know Anand had a chance at some point to play with early silicon. I don't know if he still has any numbers, or maybe they just let him use it at a lab but no benchmarks? I'll have to ask him....
Hector2 - Wednesday, June 12, 2013 - link
What I was referring to you won't find outside of Intel because Tejas never got that far out of the gate. It was clear during the design phase in Austin it would take too much power to get to the single core MHz target they wanted. Dual core beat it.JDG1980 - Tuesday, June 11, 2013 - link
Why don't you think that AMD is going to reduce the pipeline length? I was under the impression that there were already some steps being made in that direction with Steamroller. The original Zambezi Bulldozer release was such a fiasco that it got AMD's management team canned, and the new management openly admits it was a failure. Stuff like this 5 GHz chip is clearly just a stopgap solution; in the medium term (the next couple of years) they either need to fix their architecture so single-thread IPC is at least at Sandy Bridge standards, or they need to just ditch the construction-equipment line of cores altogether and return to a K10 derivative (just as Intel ditched NetBurst for the P6-derived line of CPUs that continues to this day).
silverblue - Wednesday, June 12, 2013 - link
I doubt that AMD can get SB-level single-thread performance with Steamroller. These cores are much smaller than Intel's; the real performance boost should be in multithreaded scenarios, though it shouldn't be difficult for Steamroller to, well, steamroll previous FX processors even in light workloads. Nehalem single-threaded performance would be nice.
bji - Wednesday, June 12, 2013 - link
If AMD can stay afloat for a few more years (and they've managed to do so under worse conditions than now, so I don't see why they won't), they'll close the performance gap with Intel.
Haswell has demonstrated that x86 is asymptotically approaching "the fastest per-core that it will ever be." The writing was already on the wall with Ivy Bridge, and now Haswell has put the definitive stamp on the issue.
It's simply not cost-effective for Intel to pump in the ever-increasing R&D dollars, not to mention transistor count, necessary to advance x86 IPC significantly. The x86 market is not growing relative to other chip markets anymore thanks to mobile devices, and "more than good enough for 99% of uses" was already achieved by x86 some time ago. There just aren't that many dollars chasing higher x86 performance anymore, and Intel can't continue to spend what is necessary to advance x86 speeds on a reduced performance-dollar budget.
Given that, AMD now has time to play catch up. Intel is no longer going to be making each of its successive chip architectures significantly faster than the previous, so AMD is no longer chasing a moving performance target. Because AMD has not hit the same walls that Intel has hit in terms of IPC improvements yet, it has room to advance its performance more quickly than Intel does - until it hits the same walls in a couple of years.
At that point, AMD and Intel will have fairly similarly performing parts from an IPC perspective. Intel will still have superior process technology which will allow it to have a better high end story, although not nearly as much better as it has now. Intel will also have better thermals and power use due to superior process technology as well as generally better designs.
Even though AMD will eventually reach near parity with Intel on performance, it will not benefit from it the way Intel has. Because by that point, the x86 market will be even further cannibalized by mobile devices and AMD will be fighting for an ever larger piece of an ever smaller revenue stream. Intel already milked x86 for the best of what it was worth, from a profit perspective, over the past few years, and by the time AMD is in a position to significantly challenge Intel, the milking will be over. AMD will continue to do just well enough to stay afloat but will never make boatloads of money off of x86 like Intel has.
testbug00 - Wednesday, June 12, 2013 - link
Considering that overclocked Piledriver uses 268 watts (from the source I heard) and everything I have seen points to a max 220W power usage, it is amazing.
And they could get it down to 150 (maybe) if the parts are similar to the Richland xxx5/xxx7 parts... I don't know :(
And my understanding is that Steamroller is a deep change to the Bulldozer uarch... :)
Also, considering that NetBurst's power consumption was more from leakage (and I believe most of Bulldozer's is not), the issue is a lot more about the silicon it is made on...
On another note, I met someone who apparently has run their 8350 at 1.7 volts since they got it (right when it came out)... This scares me, because AMD has been really conservative with voltages on Llano and Trinity. If they are not conservative, I think 150W might be possible for the part that turbos to 4.7 :)
JlHADJOE - Thursday, June 13, 2013 - link
Confirmed @ 220W:
http://techreport.com/news/24953/amd-reveals-base-...
JarredWalton - Thursday, June 13, 2013 - link
Article has been updated. TechReport probably just got the same email I did. 220W. Ouch.
JDG1980 - Tuesday, June 11, 2013 - link
The rumors have been saying 220W TDP. That sounds outrageous, but the rumors about "Centurion" have turned out to be right so far. That would explain why they are starting off by selling to system integrators only - if they sold a 220W chip through normal retail channels, it would result in a lot of fried motherboards (and maybe even house fires).
JarredWalton - Wednesday, June 12, 2013 - link
Just for reference, because it's somewhat interesting: the highest-ever TDP on a retail x86 CPU to date belongs to AMD's 140W Phenom II X4 965 BE. If we widen the search a bit, I don't know about RISC CPUs, but Itanium 2 had a maximum TDP of 150W, and that was an absolutely MASSIVE chip. I could see AMD going for 140W again (likely because they need the power to hit the clocks they're talking about), but more than that would be seriously crazy-train territory.
What's scary is the more I think about it, the more the "system integrator" stuff makes me think there might be credence to some of the 200W-or-higher rumors. It would also explain the bump to the FX-9000 series ("9000 is for high power parts"), at least in part.
I really hope AMD doesn't try to go with a >150W part, and more than that I really hope most consumers are wise enough to not buy one, especially if they launch at significantly higher prices which is what I'm now hearing. Could you imagine $500 for the FX-9590? More than any Haswell or Ivy Bridge part, and while I suppose it would be competing against the SNB-E parts, I'm not sure what benchmarks it would win, even at 5GHz. Guess we'll know in the next couple of months.
lmcd - Wednesday, June 12, 2013 - link
Is it possible this is a die shrink? Like Apple moved their A5 from 45nm to 32nm quietly, could AMD have moved to 28nm without mentioning it? After all, Steamroller is supposed to be 28nm GloFo, which hasn't been used before.
JarredWalton - Thursday, June 13, 2013 - link
This isn't Steamroller, though -- it's just Vishera redux. So it's 32nm; AMD would be making a far bigger deal over getting to 28nm, I'd wager.
MrSpadge - Wednesday, June 12, 2013 - link
125 W should be enough for single-threaded turbo, but loading all cores requires significantly more juice. That Trinity/Richland example is actually quite nice in this regard: they need 100 W even for 2 modules at 4+ GHz. Well, they can stay within reasonable TDPs if they just keep the base clock low enough...
lmcd - Wednesday, June 12, 2013 - link
Don't forget the IGP in that Trinity/Richland example.
Hector2 - Tuesday, June 11, 2013 - link
It'll come down to real performance metrics, price, and power. AMD led the charge away from MHz years ago with a focus on actual performance, not MHz. Strange that they're pushing MHz again now.
Khenglish - Tuesday, June 11, 2013 - link
Higher frequency isn't a bad thing. What's bad is killing your cache sizes and having ridiculously long pipelines to do it (Pentium 4). This new CPU looks to be a revision of Piledriver that can just flat-out run faster. I'm thinking it's somewhat like the old days, where you would see a CPU double in frequency from initial release to final stepping (P3 Coppermine). I doubt AMD went over 125W, since if they did, absolutely everyone would blast them, just like Intel was blasted back in the late P4 days. Trying to market 125W vs. 84W is bad enough, but 150W vs. 84W is getting silly.
I see this CPU actually beating Intel in some multithreaded applications. Of course, 125W is a lot higher than 84W, and if you overclock the Haswell up to 125W...
Hopefully Steamroller will narrow the IPC gap enough that the higher clock speeds are enough to catch Intel.
Mountainjoy - Tuesday, June 11, 2013 - link
Is it faster than my now 2-year-old 2600K running at 4.8 GHz? Has AMD caught up to 2011?
djshortsleeve - Tuesday, June 11, 2013 - link
Still waiting for the urge to upgrade from my 2500K.
halbhh2 - Tuesday, June 11, 2013 - link
djs, if you're like me, where I want a 60% performance gain as a bare minimum vs. my current chip, and not for $400+ but more like $200 or less, then there is more waiting to do (though in my case, I'm only waiting on price at this point).
testbug00 - Wednesday, June 12, 2013 - link
That depends. If you use Linux, it caught up with overclocked Bulldozer... if you use Windoze, no.
Mountainjoy - Wednesday, June 12, 2013 - link
No, sadly I don't fit into the 0.003% of users who'd actually gain any benefit at all from using Linux.
testbug00 - Wednesday, June 12, 2013 - link
I believe it is more like a few percent... maybe... XD ;)
But the gains from Windows to Linux for the BD uarch are crazy... well, if both platforms are optimized. I have not seen anything with vanilla Linux (no optimizations by the user); that would be interesting!
nevertell - Wednesday, June 12, 2013 - link
The performance gains come from using software that is compiled with a non-biased, optimized compiler and a kernel scheduler that is actually aware of the architecture of the CPU.
ShieTar - Wednesday, June 12, 2013 - link
You did not buy that CPU at 4.8 GHz. It's rather silly to expect a stock CPU to beat a massively overclocked CPU from only 2 years ago.
JlHADJOE - Saturday, June 15, 2013 - link
Pretty sure Nehalem did that to Core 2. And then Sandy did it again to Nehalem.
In that regard, Haswell is pretty disappointing.
npp - Tuesday, June 11, 2013 - link
I think a 200W TDP is unrealistic; I doubt there was ever a stock consumer CPU with such a rating, and it seems unlikely any manufacturer would try to push such parts as even "enthusiast" class. 140W may be a realistic target, or why not even 125W, as was the case with the Phenom II X6 series? They were able to slip into the same power envelope as the older X4 parts, despite having two more cores.
silverblue - Wednesday, June 12, 2013 - link
I imagine GF's 45nm process was sufficiently mature by the time the X6 was due for them to make some decent power savings; however, they were generally clocked below the X4 series.
notorious1212 - Tuesday, June 11, 2013 - link
Here's more of what I'm thinking:
http://media.bestofmicro.com/Z/7/357667/original/i...
http://media.bestofmicro.com/X/K/357608/original/3...
Source: http://www.tomshardware.com/reviews/fx-8350-visher...
I am skeptical about the 125W TDP, but if that's the case, the performance gain from a base clock increase isn't that bad. I would probably invest in that. I don't have much hope for increased game performance. However, I think if the next-gen consoles start bringing multi-core-friendly titles to the PC, AMD's parts might have more longevity than they're given credit for.
JarredWalton - Tuesday, June 11, 2013 - link
As far as gaming goes, Ian has already shown that the vast majority of games are GPU-limited at high quality settings until you get more than two GPUs -- provided you have at least an A10-5800K or FX-8350, your CPU is "fast enough" for nearly any game we're likely to see in the next couple of years.
Traciatim - Wednesday, June 12, 2013 - link
That's only true for a certain subset of games, and generally those games are console ports (i.e., simple CPU stuff, where 'high' settings means "just add more textures and launch it on PC!").
Pretty much any grand-scale PC game (like MMOs or RTS games) is heavily CPU-bound, and in the vast majority of cases where CPU matters, AMD falls flat on their face.
Though console ports will still probably run well, considering the CPUs in both the new consoles are low-powered, netbook-style CPUs. So I'm sure there will be enough games for everyone to play.
I can only hope both the server and client for Planetary Annihilation have benchmarking tools; that would be great.
iamezza - Friday, June 14, 2013 - link
It all depends on the games. You can't conclude that the vast majority of games are GPU limited unless you actually test a large number of modern games; Ian only tested 4. There are a large number of games out there, often PC-only multiplayer games, that are massively CPU bound. Check out this one: http://www.bit-tech.net/hardware/cpus/2012/05/01/i...
In Shogun 2 an overclocked i5-3570k gets triple the minimum FPS of a stock Phenom II X6 1100T (36FPS vs 12FPS!)
The gains in Arma II are large too, at 80% higher minimum FPS.
flurazepam - Friday, June 14, 2013 - link
By the same token, an overclocked i5-3570K is twice as fast as an i7-3930K in Shogun 2 according to the graph you linked to. We know the 3570K doesn't stand a chance against the 3930K in anything multi-threaded. This strongly suggests the game is single threaded, because the OC'd 3570K is also faster than the OC'd 3770K in the results you presented. To suggest these games are CPU bound is disingenuous if they're only using one core. Moreover, the memory controller also features prominently in some games, and this needs to be addressed as well.
rocketbuddha - Tuesday, June 11, 2013 - link
Jarred, for now the next-generation CPU core, also known as Steamroller, will only find its way into APUs.
AMD has not (yet) given any indication as to when the Piledriver-based FX MPUs will transition to a Steamroller-based FX series. So you might need to tweak the below statement like AMD tweaked the FX series ;-)
<quote>
Bulldozer was 1st Generation FX-series, Piledriver is 2nd Generation FX-series, and ahead we still have Steamroller (3rd Generation FX-series) and Excavator (4th Generation FX-series), but they’ve chosen a different route.
</quote>
But tell me this: is the base clock the same 4.0 GHz for the FX-9590? If 5GHz is a turbo speed, it's a safe bet that some cores are turned off to guarantee it. Apparently AMD has been hiring marketing BS people from Samsung (of the Exynos "Octa" fame) while Samsung has been poaching AMD's engineering talent. ;-)
Cheers!
JarredWalton - Tuesday, June 11, 2013 - link
I took out the FX-series reference for you, since you're correct. As for the base clock, it remains (as yet) undisclosed, probably for good reason. And I'm not sure if AMD poached Samsung's marketing people or if they just hired the same marketing group that handled Intel's NetBurst clock speed frenzy. :-)
DigitalFreak - Tuesday, June 11, 2013 - link
Can't have a race when only one person is running.
polyzp - Wednesday, June 12, 2013 - link
AMD FX 9590 benchmarks leaked! See for yourself: amdfx.blogspot . com
silverblue - Wednesday, June 12, 2013 - link
Ha. If those were true, there'd be no point whatsoever to the 9590; Steamroller would obliterate it in nine or so months from now. If the 9590 had a base clock of 4 to 4.2GHz, I could very much believe a TDP of 140W max, but no lower, as turbo is per module as opposed to per core. However, from the various rumours floating about, there's little chance of a base clock that "low", meaning some significant power draw at load. Would this halo product actually use a proper implementation of CnQ in order to keep clocks and power down at idle?
Overclocking any CPU can dramatically raise power consumption; I imagine a 5GHz i7-2600K would suck down some power, but anything thereafter would drop that a fair bit.
larkhon - Thursday, June 13, 2013 - link
If those benchmarks are anywhere near true, the next thing we need to know is pricing. Will it be on par with the i7-4770K? It's always funny to see the criticism aimed at AMD and their 5GHz CPUs. Even if AMD beats Intel performance-wise, there will always be people saying 'it does, but power consumption is that much higher'. Basically, Intel is just changing the rules so that they always win.
And Intel is how many times bigger than AMD? It's hard to compete when you have far less money...
JDG1980 - Thursday, June 13, 2013 - link
No one's changing the rules. AMD had better products than Intel during the NetBurst era because they had better IPC and lower power consumption, even if the top-of-the-line Intel CPU sometimes beat AMD's in raw performance. In late 2003, Intel announced a modified Xeon CPU as the Pentium 4 Extreme Edition, which narrowly defeated the Athlon 64 in many benchmarks, but it cost $999 and was widely panned by reviewers. Do a Google search for 'Pentium Emergency Edition' to see what people were saying about it at the time. This 5 GHz Vishera is basically the same thing coming from AMD: a stopgap solution with high clock speed but low IPC, and terrible power efficiency and thermals. Intel only broke out of this pattern when they updated their core to Conroe (Core 2 Duo) and dropped their focus on high clock speeds and long pipelines. AMD will have to do the same if they intend to compete. I know their focus isn't as much on the desktop as it used to be, but the construction equipment series is very ill-suited for portable devices, so it doesn't even make sense from that perspective. The only place the high-core-count, low-IPC, weak-FPU parts might have any advantage at all is the server market - and they have not been very successful even there.
HisDivineOrder - Wednesday, June 12, 2013 - link
Well, at least AMD is looking at the non-APU performance CPU segment, even if this isn't the offering much of anyone wanted. It's nice to see AMD offer some token competition.
BSMonitor - Wednesday, June 12, 2013 - link
Aren't these the 200W parts we have seen rumors of?
woogitboogity - Wednesday, June 12, 2013 - link
Ah, for the days of Thunderbird... when being a rebel meant a faster and cheaper CPU. Now being a rebel just means you are on a budget and have to make do. For goodness' sake, Intel proved in spectacular fashion with the P4 architecture that clock speeds don't mean jack. Core 2, Nehalem, and everything that has followed have shown that architecture can be everything. Yet here we are...
AMD resembles the USSR more every year:
AMD: "You do something... we can do bigger!"
INTEL: "but... um... the point was to make it small..."
AMD: "I say point is make big! Strong like bull! Smart like tractor! High clock speed like Bulldozer!!!"
Guspaz - Thursday, June 13, 2013 - link
Let me just whip out my 1.21 jiggawatt processor I've got here... The TDP of this thing is insane (in a bad way). Ludicrously high TDPs to get minor clockspeed bumps (relative to power investment) smack of marketing BS.
beesbees - Thursday, June 13, 2013 - link
This is like Intel racing AMD with Intel a few car lengths ahead, but with a dead end up the road. Play catch-up while you can, AMD, before that dead end turns out to be a new access road and Intel has more road to stay ahead of you.
Hector2 - Friday, June 14, 2013 - link
Over the years, Intel has usually had the advantage of being a process node (or two, now) ahead of AMD. That makes it more difficult for AMD to compete in either performance or power. I give AMD credit for doing as well as they have until the last few years. Now, not having their own fab makes it more difficult still.
JarredWalton - Friday, June 14, 2013 - link
Hey, they're still only one node ahead: 32nm to 22nm was Intel's last shrink. The only problem is Intel is due to roll out 14nm around the time AMD goes to 28nm, so AMD is only getting a half-node the next time. Of course, AMD (and GF) are behind on the 28nm transition already.
silverblue - Friday, June 14, 2013 - link
I think they're relying on the high density libraries to bring down Excavator's power usage in line with another die shrink. It would help immeasurably if they brought that particular idea forward to Steamroller, really.JDG1980 - Friday, June 14, 2013 - link
Global Foundries can be blamed for some of AMD's woes, but not all. Intel manufactured Sandy Bridge on 32nm; if AMD had anything that good, I think there would be a lot less dissatisfaction with their offerings. The problem is that Bulldozer/Piledriver just has atrociously low IPC and power efficiency compared to Intel's last couple of generations.
woogitboogity - Monday, July 8, 2013 - link
The funny part is that Intel could never afford to have AMD go under. The industry has been free of anti-trust lawsuits mainly because Intel busts out their "gimp" (AMD) as a counter-example. The cool thing about that, though, is that AMD will always have the resources to make use of chip design talent (especially something revolutionary). After all, Intel's Core 2 chips came out of nowhere and caught AMD flat-footed.
As much as we complain about it, business-wise AMD relies heavily on the APPEARANCE of being neck and neck with Intel. Intel can generally manage to outdo them in almost every way but I wonder whether they deliberately allow these sorts of moments to happen so that AMD does not get marginalized completely and put out of business.
glugglug - Thursday, June 13, 2013 - link
At 220W, I doubt the OEM solutions are air-cooled! It's understandable that these are going to system integrators only -- that is just nuts! I'm thinking the only possible buyers here are Alienware and VoodooPC.
Quindor - Thursday, June 13, 2013 - link
Maybe they integrated a crazy strong GPU after their experience designing the SoCs for both the PS4 and Xbox One? If OEMs build a proper box with closed-loop cooling in it, it could enable some very interesting designs, which would be harder if the heat were coming from two chips. Since the new console announcements, I've actually been hoping to see some way to convert them back to 'normal PC' usage, because they would make awesome cheap gaming rigs. And historically, all consoles ever released have had some alternative way of using them. ;)
JDG1980 - Thursday, June 13, 2013 - link
The FX series are plain CPUs with no integrated graphics. What you're looking for is Kaveri, which is coming later this year and is supposed to integrate a reasonably good GPU and have a homogeneous memory architecture similar to what is in the new consoles.
dragonsqrrl - Friday, June 14, 2013 - link
"So we're basically looking at a 76% increase in TDP relative to the FX-8350 to get a 19% increase in maximum clock speed. It's difficult to imagine the target market for such a chip"... my thoughts exactly.
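The quoted percentages can be sanity-checked against the published specs; a quick sketch, assuming the FX-8350's rated 125W TDP and 4.2GHz max turbo versus the reported 220W and 5.0GHz for the FX-9590:

```python
# Sanity check of the quoted figures (spec values as reported, not measured):
fx8350_tdp_w, fx8350_turbo_ghz = 125, 4.2
fx9590_tdp_w, fx9590_turbo_ghz = 220, 5.0

tdp_increase_pct = (fx9590_tdp_w / fx8350_tdp_w - 1) * 100
clock_increase_pct = (fx9590_turbo_ghz / fx8350_turbo_ghz - 1) * 100

print(round(tdp_increase_pct))    # 76
print(round(clock_increase_pct))  # 19
```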
silverblue - Friday, June 14, 2013 - link
Well... there are always those who think it'll overclock better than previous chips, but I can't imagine that being a particularly large market. Kind of leaves a bitter taste. I wonder what amount of power AMD expects this to use on average.
TheinsanegamerN - Sunday, June 30, 2013 - link
Well, the 125W 8350 uses about 213 watts under maximum load with turbo enabled; that's roughly a 70 percent increase. If the same scaling applies to the 9000 series' 220W rating, that would be around 375 watts.
TheinsanegamerN - Sunday, June 30, 2013 - link
However, if AMD uses slightly lower voltages, more controlled turbo behavior, higher-binned chips, etc., my bet is about 300 watts.
CharonPDX - Friday, June 14, 2013 - link
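The back-of-the-envelope scaling in the two comments above can be sketched as follows. This is pure commenter speculation: it assumes measured load power scales linearly with rated TDP, using the 213W figure quoted for the FX-8350.

```python
def scaled_load_power(rated_tdp_w, measured_load_w, new_rated_tdp_w):
    """Assume actual load power scales linearly with rated TDP (a rough guess)."""
    return new_rated_tdp_w * (measured_load_w / rated_tdp_w)

# FX-8350: 125W rated, ~213W measured at max load (figure from the comment).
# Apply the same ratio to the FX-9000 series' 220W rating:
print(round(scaled_load_power(125, 213, 220), 1))  # about 375 W
```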
Wouldn't a race involve more than one competitor? Intel gave up on the GHz race when they decided to end NetBurst at 3.8 GHz and go dual-core (and then to the Core architecture) instead.
Hector2 - Friday, June 14, 2013 - link
AMD's FX-9000 series' 220W power demonstrates exactly why Intel (and AMD) originally stopped focusing on MHz and started building multi-core chips with higher total performance and lower total power.
Azethoth - Friday, June 14, 2013 - link
Aw, I came here to read about magical MHz-race breakthrough tech. Instead I get an article about how you can get high MHz with a chip as hot as the sun. I like speed, but I desire quiet above all. This does not make things quiet.
ibilisi - Friday, June 14, 2013 - link
Wait, haven't we been through this already? I think I have a couple of Delta Black Label 60mm fans sitting around and a couple of Alpha PAL heatsinks. I could probably mod them to fit. Or I could get out my full-on double Panaflo (metal) fan watercooling setup with a Chevy heater core.
Give me a break, AMD; why even release this? Realistically, releasing this chip is equivalent to AMD waving a white flag. The market has shifted to efficiency ACROSS THE BOARD: mobile devices, servers, HTPC, desktop, you name it. A new GHz race? Please, consumers won't fall for it this time.
Gabo91 - Friday, June 14, 2013 - link
You should stop reading at the third paragraph; everything else doesn't matter.
Beenthere - Saturday, June 15, 2013 - link
I doubt the 220W is really TDP; it's more likely total power consumed, despite what some may say. If you OC an FX-8350 to 5.0 GHz and measure the total consumed power, it's closer to 165W, so that is what the actual TDP is likely to be in comparison to the 125W TDP rating of the stock FX-8350. The claimed 220W TDP may be there to justify not selling these CPUs to the retail segment?
JarredWalton - Saturday, June 15, 2013 - link
Um, no. Not at all. If you overclock an FX-8350 to 5.0 GHz and do nothing with it and measure idle power, yes, it's probably around 165W. As soon as you put any sort of load on it, it will jump up to 300W or more. TDP is about the maximum power a chip will use under load, not about what it will use in a light workload. Surfing the web, it's probably going to consume 40-50W; playing a game or encoding a video, it will probably be very close to that 220W.
I would bet they'll eventually sell the FX-9000 parts in the retail market as well (though potentially only as OEM chips rather than boxed chips). They would need a good motherboard and cooler, though, which AMD can't necessarily guarantee on the motherboard side, so we'll see.
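The steep climb in load power that Jarred describes follows the textbook CMOS dynamic-power approximation P ≈ C·V²·f. A rough illustration with hypothetical voltages, not AMD's actual figures:

```python
def dynamic_power(base_power_w, base_ghz, base_v, new_ghz, new_v):
    """CMOS dynamic power scales linearly with frequency and with voltage squared."""
    return base_power_w * (new_ghz / base_ghz) * (new_v / base_v) ** 2

# Hypothetical: a 125W part at 4.0 GHz / 1.30 V pushed to 5.0 GHz
# with a 10% voltage bump (illustrative values only).
print(round(dynamic_power(125, 4.0, 1.30, 5.0, 1.43), 1))  # about 189 W
```

Leakage and static power are ignored here, so real-world draw climbs even faster than this sketch suggests.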
denb - Saturday, July 13, 2013 - link
Dear Jarred: To get 220 Watts at 3 Volts requires over 70 Amps,
How many stock PCs can supply that for just the CPU?
Thanks for your info, den
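denb's arithmetic is just I = P / V; a quick check below. Note that modern CPU cores actually run nearer 1.2-1.4 V, which makes the current draw even higher than the 3 V figure suggests (the voltage regulators on the motherboard convert the PSU's 12 V rail down to this).

```python
def current_amps(power_w, volts):
    """Current draw for a given power and supply voltage (I = P / V)."""
    return power_w / volts

print(round(current_amps(220, 3.0), 1))  # about 73 A at denb's assumed 3 V
print(round(current_amps(220, 1.3), 1))  # about 169 A at a more typical core voltage
```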
UltraTech79 - Saturday, June 15, 2013 - link
What does it matter? The FX series is AMD's Pentium 4.
denb - Tuesday, June 18, 2013 - link
Dear Jarred: Do you know the max DDR3 speed supported? Their new A10-6800K can use up to 2133 MHz. If this FX-9000 does not as well, then it is kind of a waste of watts!
Calinou__ - Thursday, June 20, 2013 - link
The FX chips can also use 2133MHz RAM, but it is considered overclocking.
denb - Monday, July 1, 2013 - link
Dear Calinou: Do you mean that if the motherboard BIOS supports 2133 MHz, then an FX-9370/9590 will see that and use it automagically?
random2 - Thursday, June 20, 2013 - link
OK... am I losing it? "We have now received the most important pieces of information from AMD..."
Not quite. Other pertinent information is also important, like... when is the product being released, and when will it be available?
denb - Monday, July 1, 2013 - link
AMD said: "In a few months."
tiagosilvamsn - Thursday, September 5, 2013 - link
For me, such a high TDP is no concern; I have a good air cooler. I hope new motherboards come out for it. Here in Brazil there's a lack of good motherboards for AMD, and there isn't good retail support for parts either. Even so, I chose AMD and will always continue to use their products.