I was just going to say the same thing. I was all about AMD last year, but early this year I picked up an i5 2500K and was blown away by its efficiency and performance, even in a hobbled H67 board. Once I bought a proper P67, it was on. It's not that Bulldozer is terrible (because it isn't); Sandy Bridge is just a "phenom". If SB had only been a little faster than Lynnfield, it would still be fast. But it's a big leap to SB, and it's certainly the best value. AMD has Bulldozer, an inconsistent performer that is better in some areas and worse in others, but has a hard time competing with its own forebear. It's still an unusual product that some people will really benefit from and some won't. The demise of the Phenom II can't come soon enough for AMD, as some people will look at the benchmarks and conclude that a super-cheap X4 955BE is a much better value than BD. I hope it isn't seen that way, but it's not a difficult conclusion to reach. Perhaps BD is more forward-looking, and the other octo-core will be cheaper than the 8150, making it a better value. I'd really like to see the performance of the 4- and 6-core parts before passing judgement.
It's still technically a win, but it's a Pyrrhic victory.
I tell friends that exact thing all the time. Phenoms are great CPUs, but switch to Nehalem or Sandy Bridge and the speed is noticeably different. At equal clocks, Core 2 Quads are as fast or faster.
Bulldozer ends up with a lot of issues that fanboys refused to see, even though Anandtech and other sites did bring them up in previews. I guess it was just hope, and an understandable disbelief that AMD would be behind for a decade until the next architecture. We can start with clock speed, but being only dual-channel isn't helping memory bandwidth either. I don't think there is enough L3, and they most definitely should have a shorter pipeline to crunch through processes. They need a 1.4 to 1.6 scaling factor in Cinebench, or what is the point of the modules?
The module philosophy is probably close to the future of x86, but I imagine Intel will keep HT enabled on the high-end SKUs. Also, I think both of them want to shift FP calculation over to GPUs.
Yeah, I agree. To me, Bulldozer feels about a year late.
It's just not competitive enough, and the fact that you have to sacrifice single-threaded performance for multithreaded, when even the multithreaded performance isn't that good and loses to the 2600K, is just sad.
They needed to win big with Bulldozer and they failed hard!
It looks to me like BD is the CPU beta bug sponge for Trinity and beyond. Everybody these days releases a beta before the money launch.
Hence the B3 stepping... and probably a few more now that a capable fab is onboard. BD is not a CPU like we're used to... it's an APU/HPC engine designed to drive code and a Cayman-class GPU at 28nm and lots of GHz... I get it now.
Also, the whole massive-cache, 2B-transistor thing, with 800M dedicated to I/O (SB uses 995M total), finally makes sense when you realize that this chip was designed to pump many smaller GPGPU caches full of raw data, then process and combine all the outputs quickly.
Apparently GPUs compute very fast but have slow fetch latencies, and the best way to overcome that is by having their caches continuously and rapidly filled... like from the CPU with the big cache and I/O machine on the same chip. How smart... and convenient... and fast.
I don't see how this can be considered an APU. This product isn't being marketed as an HPC processor, and I don't see the benefit of this architecture in GPGPU environments at all.
It's sad... I've always given major kudos to AMD. Back in the days of the Athlon's prime, it was awesome to see David stomping Goliath.
But AMD has dropped the ball continuously since then. Thuban was nice, but it might as well be considered a fluke, seeing as AMD took a worthy architecture (Thuban) and ditched it for what's widely considered a joke.
And the phrase "AMD dropped the ball" is an understatement.
They've ultimately failed. They haven't competed with Intel in years. They... have... failed. After Thuban came out, I was starting to think that the years they competed on price and clock speed alone were a fluke, just a blip on the radar. Now I see it the opposite way... it seems that AMD merely puts out good processors every once in a while... and only by accident.
By badmouth do you mean objectively tell the truth? Do you blame PCMark or FutureMark for any of that? Perhaps if all the tests just said that AMD was clearly better, it wouldn't be badmouthing anymore.
The slightest "problem" imaginable with AMD GPUs would make it into article titles.
An nVidia article would compare a cherry-picked overclocked board vs. a standard one from AMD, with laughable "explanations" like "oh, nVidia marketing asked us to do it; we kinda refused, but then we thought that since we'd already kinda refused, we might as well do what they asked."
I recall low-power AMD CPUs being tested on 1000-watt PSUs on this very site. How normal was that, cough? Or iPhones "forgotten in a pocket" (the author's comment) in comparison photos where they would look unfavourable.
The thing with tests is, you have games that favour one manufacturer, then other games that favour another. Choose the "right" set of games, and voilà...
Testing a 35W TDP CPU on a 1000-watt PSU is TOO DAMN LOW and should never happen.
On top of that, the vast majority of games are more GPU-sensitive than CPU-sensitive. Now, one could reduce the resolution to ridiculously low levels so that the CPU becomes the bottleneck, but then, who on earth cares whether you get 150 or 194 frames per second at a resolution you'll never use?
Not sure what the deal is with PSUs or what article you're referring to. I'm assuming it made AMD power consumption look worse than it was because a 1kW PSU was running at 10% load, way outside its efficiency range. But whatever. My comment is mostly about CPU performance in games. Just because you don't run a game on a top-end CPU with $800 of GPUs in a multi-GPU tandem at the lowest settings doesn't mean that setup shouldn't be used to determine CPU performance. By making the CPU the bottleneck, you make it do as much as it can while the GPU spits out frames, whistling tunes and picking its fingernails. There is more load on the CPU than the GPU, so whichever CPU is faster will provide more FPS. Simple as that. Sure, no one will see a 20%-30% performance difference at more appropriate resolution and quality settings. But we're enthusiasts; we want to see peak performance differences and extreme loads. Most synthetic tests are irrelevant to everyday use, but performance has been measured that way for decades.
I haven't seen one single sentence that was questionable in an AMD graphics review. In fact, I'm glad to say that I'm a big fan of Intel CPU and AMD GPU combos, and I've never noticed even as much as a hint of bias.
That's an over-exaggeration. In an age where we're all stuffing multiple cards into our systems, the cards are efficient, reliable, and powerful, and they run cool. Yes, the drivers have sucked in the past, but they don't really anymore.
(emphasis on the word seem)
Nvidia cards have just seemed clunky and hot as hell since the 400 series. I don't feel like gaming next to a space heater. And I definitely don't want to pay 40 percent more for 10 percent more performance just to have a space heater and bragging rights.
It's like AMD graphics are similar to Intel's CPU lineup: they're great performance-per-dollar parts, and they're efficient. But Nvidia and Intel graphics are like AMD CPUs: they're either inefficient, or they're good at only a few things.
The moral? What the *$&*, AMD... you might as well write off the whole desktop business if the competition is fifty percent faster and gaining ground... that 15 percent you're promising next year had better be closer to 50, or I'm going to forget about your processors altogether.
40% more cost and 10% more performance? You said that's across the board. I'm certainly glad you aren't the reviewer here on anything. I mean, really, that was over the top.
I think your later point is exactly why the FPU support isn't as strong: (most) tasks that use the FPU appear to operate on large matrices of data. The sequential-processing side seems like a good design idea (even if the implementation is a little immature and a little early), but L1/L2 cache access latency is slow. I hope that's an area that will be addressed in the next iteration.
ckryan, you stated you were blown away by the 2500K, yes? It's odd, you know... I've owned a PII 920, PII 1055, and PII 955 (and tested lots of lowbie $60-80 parts from AMD too)... I've also used an i7 920, i7 955, i5 2500K, and i7 2600K (my most recent one), and... I am not blown away by any of them.
The last time I was blown away by a CPU was the Q6600 (and before that, the A64 3200+). Since then, other CPUs have been better, but not so much so that I'd call it a night-and-day difference.
OK, that was some enormously skilled twisting and spinning. BD is an epic failure, period. I can't envision anyone with any need, or combination of needs, choosing it. It's so bad AMD lied about its transistor count. Forget it; it's an epic fail and never anything more.
Ugh, BD is quite the disappointment. The power consumption is absolutely through the roof -- unacceptable for 32nm, really!
With that said, I am very intrigued by the FX-4100 4-core 3.6GHz part. This should be the replacement for the Athlon II 2-4 core series, and I'm very interested to see how it does vs. ~3GHz Athlon II X2s, X3s, and X4s.
Wow ... I'm blown away. I have been waiting for BD's reviews and benchmarks for months. I have waited for BD for my new rig. I have used AMD for the past 8 years and I am ... was convinced that it always offered, by far, the best price/performance ratio for entry-level and mid-range PCs. I am still a big fan of AMD ... but I have to stand corrected. BD is a POS. Longer pipelines? Didn't they learn anything from the Pentium 3-to-4 debacle? A Phenom II X6 is almost always better than BD, even in power consumption. Come on: if BD had come out shortly after the Phenom, I could see it as an incremental improvement, a new baseline to build upon. But it took AMD years to come out with BD ... and this is the result? Disappointing. I mean, betting everything on higher clock frequencies? At 4GHz? It's no wonder that Intel's IPC improvements are crushing BD: IPC is all about doing more with the same power, while clock speed is all about throwing more power at doing the same thing faster ... Boy. This ruined my day.
By the way, no matter how AMD slices it, I see the FX-8* as a 4-core CPU. A glorified Phenom, but still a 4-core. If I were AMD, I would have considered it a fair goal to obliterate i5-2500 performance with the new FX-8 family; instead, it falls short most of the time. What were they thinking?
Yep, this really is extremely disappointing. I'm actually going to call this AMD's Pentium 4. That's how bad this is.
2 billion transistors: that's a massive increase over the Phenom II X6, and what do we get? Nothing. The Phenom II is at least as good with WAY fewer transistors and lower power consumption under load. I'm pretty shocked at how bad Bulldozer is. I wasn't expecting clock-for-clock performance to be as good as Nehalem, let alone Sandy Bridge, but this is just... appalling. When Ivy Bridge is out, the performance difference is going to be MASSIVE.
Intel is surely going to implement more restrictions and hold their clock speeds back even further. There's just no competition anymore. Sad day for consumers.
AMD's Prescott, to be exact... ironically, that's one thing they seemed to shoot for: deepening the pipeline and hoping the process would be better... hopefully this is just immature, and soon there will be a GF110-style refresh that does it properly...
Otherwise the whole next gen of AMD CPUs will continue to fight for scraps at the bottom of the heap... and their laptop CPUs will not even succeed there.
I don't even know if it's just the process, since those power consumption figures seem to suggest they're being limited by the sheer amount of power it's using and the heat generated from that. Intel had planned to take the P4 to 10GHz, but the fact that it was a power hog prevented that from realistically happening, and it seems like you have the same issue here. The clock speed potential is clearly there, since it can hit 7GHz under liquid nitrogen, but for a normal air heatsink setup this is a recipe for failure. It's just way too power hungry and not fast enough to justify it. Why would anybody choose to use an extra 100 watts for largely the same or worse performance vs. an i5 2500K?
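To put that extra 100 W in perspective, here is a rough back-of-the-envelope sketch; the hours of load per day and the electricity rate are assumptions for illustration, not figures from this thread:

```python
# Back-of-the-envelope cost of an extra 100 W under load.
# The 4 h/day of load and $0.12/kWh rate are assumed placeholders.
def yearly_cost_usd(extra_watts: float, hours_per_day: float, usd_per_kwh: float) -> float:
    """Yearly cost of the extra draw: kWh consumed per year times the rate."""
    kwh_per_year = extra_watts / 1000 * hours_per_day * 365
    return kwh_per_year * usd_per_kwh

cost = yearly_cost_usd(extra_watts=100, hours_per_day=4, usd_per_kwh=0.12)
print(round(cost, 2))  # 17.52 USD/year
```

Not a fortune per year, but it compounds with the heat dumped into the room and the bigger PSU/cooling you need.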
I agree, 2 billion transistors are doing what exactly?
The worst thing is that the water cooler isn't included with the FX-8150. At the performance levels they're providing, they should have just upped the price 30-50 bucks and provided the cooler gratis. Who's gonna need an AMD-branded cooler if they're not going to buy Bulldozer?
The other point of this review is that there is no availability of any of the parts. So what a wonderful paper launch we have here. Seems like AMD isn't betting on anyone being interested enough to buy one of these things.
Sigh... it's quite sad. There must be actual people buying these... either that or the supply is terrible, because there's no way in hell I'd pay those prices for an AMD processor.
Like most people here, I'm disappointed with BD's performance. Even though I have never owned an AMD CPU since my 386DX/40, competition in the performance segment would be nice for a change.
I won't argue against "it's not 8 core", but calling it a 4-core is IMHO just as inappropriate (if not more).
OK, how about 4 modules, with 8 integer EUs, 4 fetch units, 4 decoders, 4 L2 caches... The point being, they are 4 modules, not 8 cores, and in many respects they are more similar to a 4-core CPU than to an 8-core CPU, being neither one (somewhere in between).
The fact of the matter remains: the IPC is bad. In multithreaded, integer-intensive tasks, BD should crush the Phenom II X6 (2 more cores, higher clock speed), but it seems you can hardly see the difference (ref: the Excel 2007 SP1 Monte Carlo sims).
AMD is now left with Llano as the only compelling reason to buy AMD over Intel (for netbooks and small notebooks, where Atom is the contender). Against Core, either the FX-8150 goes down to $200 or less, or the i5-2500 is just a better buy for the money. The one advantage is that I wouldn't need a new mobo (huge for me, but not very compelling in general).
Forgot to mention, regarding the integer-intensive test: the Core i5 is slower by about 9%, with a 9% lower clock but only 4 execution units (8 logical with Hyper-Threading, though Hyper-Threading should be nearly irrelevant in this test). What a blow.
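The per-clock comparison being made here can be sketched numerically; the scores and clocks below are illustrative placeholders, not benchmark results from the review:

```python
# Hypothetical per-clock comparison: if score and clock both differ by ~9%,
# per-clock throughput is roughly a tie. All numbers are made up.
def perf_per_ghz(score: float, clock_ghz: float) -> float:
    """Normalize a benchmark score by clock to approximate per-clock throughput."""
    return score / clock_ghz

fx_8150 = perf_per_ghz(score=100.0, clock_ghz=3.6)  # assumed FX-8150 datapoint
i5_2500 = perf_per_ghz(score=91.0, clock_ghz=3.3)   # ~9% lower score, ~9% lower clock

ratio = i5_2500 / fx_8150
print(round(ratio, 2))  # ~0.99: roughly even per clock, with half the integer units
```

That's the commenter's point: matching per-clock throughput while fielding half the integer execution resources is a blow to BD's architecture.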
We can argue about whether it's really a 4-core or an 8-core, and the argument is interesting from a technical standpoint. But the proof is in the real-world benchmarks. From a practical standpoint, if the benchmarks aren't there (and they aren't), then the rest really doesn't matter.
I looked on Microcenter where you can get a 2600K for $279 and a 2500K for $179. An i5-2400 is only $149. So AMD is going to be right back to having to cut prices and have its top end CPU go up against $149 - $179 Intel parts. Worse yet, it will, at least initially, be competing against its own previous generation parts.
There is one point of interest, though: all the FXs are unlocked (according to the story). So it's pretty likely that an FX-8100 will overclock about as high as an 8150 once the process is mature. But there again, among overclockers, AMD could find its highest-end 8150 competing against its own lower-priced 8100.
Back in the day, I loved my Athlon 64s and 64 X2s, and even though I have switched to an Intel Q6600 and then a 2600K, I still really want AMD to succeed... but it's not looking good.
Yeah, I paid $179 for my i5 2500K and it hums along at 4.8GHz (it can hit 5GHz+, but I wanted to keep the voltages reasonable). Clock for clock, Bulldozer is slower, since it's only competitive when the higher-clocked part is compared to a stock 2500K.
Their cores offer, what, 75% the speed of a normal core?
The fact is, this supposed "8-core" processor performs worse than AMD's own 6-core processor. There's no way we can get away with calling it an 8- OR a 6-core.
You took the words right out of my mouth! I am a big AMD fanboy, and I was waiting with bated breath to jump on the Bulldozer bandwagon for my next rig (and I probably still will). But this is ridiculous! I'm a computer engineer; where the hell were the simulations, AMD? Seems like you could have halved the L3, kept the extra FP resources, and been better off than you are now.
Also, don't bitch that Windows 7 doesn't recognize Bulldozer's architecture. You knew that 18 months ago, so you should have been writing a patch to make it a non-issue.
The absolute, positive only reason I would buy an FX-8150 is that my current desktop is a dual-core Athlon running at 2.2GHz, so to me the performance increase would be massive. But on second thought, if I have stuck with such a slow system this long, I might as well wait another 3-5 months for Piledriver.
<i>The power consumption is absolutely through the roof -- unacceptable for 32nm, really!</i>
Uhh, you did see the bar graph for idle power usage, right? And keep in mind this is an 8-core CPU compared to 4- and 6-core competitors.
Like you, I'm also very interested in the 4- and 6-core Bulldozers. Anand let us down by only reviewing the flagship Llano. Hopefully he doesn't do the same with Bulldozer.
What Anand reviews is mostly down to what AMD will let him have -- even sites the size of Anandtech don't simply get to call and order parts from a catalogue for review samples.
I read that at 1920x1200/1080, gaming performance depends much more on the GPU. Anyway, I'm happy with my i5-2500K ;-). Bulldozer does not seem worth the wait.
Kinda what I was thinking. When they're all developing games for a 6-year-old 3-core PowerPC system with 512MB RAM (the Xbox) instead of a computer, it's no bloody wonder.
Well, why would you target the variable PC segment when you can program for a well-established, large user-base platform with a single configuration and make a ton more money, with probably far less QA work since there's only one set of hardware to test (two for multi-platform PS3 games)?
And it's not like 360/PS3 games suddenly look like crap 5-6 years into their cycles. Think about how good PS2 games looked 7 years into that system's life cycle (God of War 2). Devs are just now getting the most out of the hardware. It's a great time to be playing games on 360/PS3 (and PC!).
Consider what AMD is and what AMD isn't, and where computing is headed, and this chip really begins to make sense. While these benches seem frustrating to those of us on a desktop today, I think a slightly deeper dive shows there is a whole world of hope here... with these chips, not something later.
I dug into the deal with Cray and Oak Ridge, and Cray is selling ORNL massively powerful computers (think petaflops) using Bulldozer CPUs controlling Nvidia Tesla GPUs which perform the bulk of the processing. The GPUs do vastly more and faster FPU calculations and the CPU is vastly better at dishing out the grunt work and processing the results for use by humans or software or other hardware. This is the future of High Performance Computing, today, but on a government scale. OK, so what? I'm a client user.
Here's what: AMD is actually best at making GPUs... no question. They have been in the GPGPU space as long as Nvidia... except AMD engineers can collaborate on both CPU and GPU projects simultaneously without a bunch of awkward NDAs and antitrust BS getting in the way. That means that while they can obviously turn humble server chips into supercomputers by harnessing the many cores on a graphics card, imagine how much more than we've seen is possible on our lowly desktops when this rebranded server chip enslaves the Ferraris on the PCI bus next door... the GPUs.
I get it... it makes perfect sense now. Don't waste die real estate on FPUs when the ones next door are hundreds or thousands of times better and faster. This is not the beginning of the end of AMD, but the end of the beginning (to shamelessly quote Churchill). Now all that cryptic talk about a supercomputer in your tablet makes sense... think Llano, with a so-so CPU and a big GPU on the same die, plus some code tweaks to schedule the GPU as a massive FPU, and the picture starts taking shape.
Now imagine a full-blown server chip (BD) harnessing full-blown GPUs... Radeon 6XXX or 7XXX, and we are talking about performance improvements of orders of magnitude, not percentage points. Is AMD crazy? I'm thinking crazy like a fox.
Oh, as a disclaimer: while I'm long AMD, I'm just an enthusiast like the rest of you, not a shill. I want both companies to make fast chips that I can use to run Monte Carlos and linear regressions. It just looks like AMD has figured out how to play the hand they're holding for a change... here's to the future for us all.
I think you bring up a very good point here. This chip looks like it's designed to be closely paired with a highly programmable GPU, which is where the GPU roadmaps are heading over the next year and a half. While the apples-to-apples nature of this review draws a disappointing picture, I'm very curious how AMD's "Fusion" products will look next year, as the various compute elements of the CPU and GPU become more tightly integrated. Bulldozer appears to fit perfectly in an ecosystem that we don't quite have yet.
Exactly. Ecosystem... I like it. This is what it must feel like to pick up a flashlight at the entrance to the tunnel when all you're used to is clubs and torches. Until you find the switch, it just seems worse at either... then voilà!
Wow, I hope that made you feel better about the crappy chip also known as "Man With A Shovel". I was just hoping AMD would stop forcing Intel to keep crippling their chips just to keep from putting AMD out of business. AMD had better fix this abortion quick; this is getting old.
Feeling fine. Not as good about the short run, but better about the long run. Unfortunately, due to constraints, it takes AMD too long to get stuff dialed in, and by the time they do, Intel has already made an end run and beaten them to the punch.
Intel can do that; they're 40x as big as AMD. Actually, and this may sound crazy until you digest it, the smartest thing Intel could do is spin off a couple of really good dev labs as competitors. Relying on AMD to drive your competition is risky, in that AMD may not be able to innovate fast enough to push Intel to where it could be if there were more and better sharks in the water nipping at its tail.
You really need eight or more highly capable, highly aggressive competitors to create a fully functioning market free of monopolistic and oligopolistic sluggishness and BS hand-signalling between them. This space is too capital-intensive for that at the moment, with chip-making technology being what it is.
Just to play devil's advocate... The launch event in London sported two PCs side by side, running Cinebench: one had the Core i5-2500K, the other the FX-8150. Of course, these systems were prepared by AMD, so the results from Anand are clearly more reliable (at least all the conditions are documented). Nevertheless, it is clear that in AMD's demo, the FX runs faster. Not by a lot, but it is clearly faster than the i5. Video: http://www.viddler.com/explore/engadget/videos/335...
Even so, assuming this was a valid data point, things wouldn't change much: the i5-2500K is cheaper and (would be) only slightly slower than the FX-8150 in the most heavily threaded benchmark. But the FX would look slightly better than Anand's results show.
"Nevertheless, it is clear that in the demo from AMD, the FX runs faster. Not by a lot, but it is clearly faster than the i5."
Check the review, Cinebench R11.5 multithreaded chart. Anand's numbers mirror the ones from AMD. Multithreaded workloads are the only case where the 8150 will outperform an i5 2500K, because it can process twice the number of threads.
Really disappointed in AMD here, but I expected subpar performance because it was eerily quiet about the FX line as far as performance went.
Desktop BD is a full failure, they were aiming for high clock speeds and made sacrifices, but still failed their objective. By the time their process is mature and 4 GHz dozers hit the channel, Ivy bridge will be out.
As far as server performance goes, I'm not even sure they will succeed there. As seen in the review, clock-for-clock performance isn't up compared to the previous generation, and in some cases it's actually slower. Considering that servers run at lower clocks in the first place, I don't see BD being any threat to Intel's server lineup.
4 years to develop this chip, and their motto seemed to be "we'll do NetBurst, but in not-fail."
It's not, but people don't buy CPUs for today's games. Generally you want your system to be future-proof, so the more extra headroom there is in these CPU benchmarks, the better it holds up over the long term. Look back at CPU benchmarks from 3-4 years ago and you'll see that the CPUs that barely passed muster back then easily bottleneck you, whereas CPUs that had extra headroom are still usable for gaming. For example, the Core 2 Duo E8400 or E8500 is still a very capable gaming CPU, especially with a mild overclock, and frankly, in games that only use a few threads (like StarCraft 2) it gives Bulldozer a run for its money.

I'm not a fanboy either way, since I own that E8400 as well as a Phenom II (unlocked to X4, OC'ed to 3.9GHz) and an i5 2500K, but if I were building a new system, I sure as heck would want extra headroom for future-proofing. That said? Of course these chips are more than enough for general use; they're just not going to be good for high-end systems. But even in a general-use situation, the power consumption is just crappy compared to the Intel solutions. Even if you can argue it's more than enough power for most people, why would you want to use more electricity?
Good points, TekDemon. But I'll add that, from what I understand, the GPU might be capable of processing huge amounts of graphics data but may have to wait for the CPU to process certain information before it can continue, hence some games only going so high in graphics tests no matter what kind of GPU is put in.
Like he said, buying a good CPU will last longer than spending that money on a really good GPU. I personally try to build a balanced system since by the time I upgrade it's a pretty big jump on all ends.
Well, I'm happy that gamer children cannot understand the point of this architecture. You obviously have no concept of the architectural advantages, since it's not designed for game-playing children or completely unoptimized synthetic benchmarks.
Bulldozer is optimized and future-proofed for professional software, rather than entertainment software for children.
Who exactly is the child here? Your infantile comment conveniently ignores the fact that AMD has made gigantic marketing pushes that are clearly directed at the gamer community, not to mention gaming-related sponsorship activities and marketing tie-ins. So, on the contrary, the company has made very visible and consciously-directed efforts to appeal to gamers with its products. It is totally unreasonable to now posit that BD is not directed at least in part toward that market segment.
Before reading (I've read all the other review sites, and I'm disappointed in AMD; I've just been dying to see your view on the matter): thanks for the review, as always.
You are not measuring "branch prediction" performance; you are measuring the misspeculation penalty (due to the longer pipeline, among other things). Nothing can "predict" random, data-dependent branches.
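That point can be illustrated by simulating a textbook 2-bit saturating-counter predictor (a generic scheme for illustration, not AMD's actual predictor): on random data-dependent branches, any predictor converges to ~50% accuracy, while patterned branches are predicted almost perfectly.

```python
import random

# Sketch of a 2-bit saturating-counter branch predictor. This is the classic
# textbook scheme, used here only to show why random branches are unpredictable.
def predict_accuracy(outcomes):
    state = 2          # counter 0-1 predicts not-taken, 2-3 predicts taken
    hits = 0
    for taken in outcomes:
        if (state >= 2) == taken:
            hits += 1
        # Nudge the counter toward the actual outcome, saturating at 0 and 3.
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return hits / len(outcomes)

random.seed(42)
random_branches = [random.random() < 0.5 for _ in range(100_000)]
patterned = [True] * 50_000 + [False] * 50_000  # long runs: trivially predictable

print(round(predict_accuracy(random_branches), 2))  # ~0.5: coin-flip accuracy
print(round(predict_accuracy(patterned), 2))        # ~1.0: near-perfect
```

With ~50% of random branches mispredicted no matter what, the benchmark ends up dominated by the pipeline-flush cost per miss, which is exactly the misspeculation penalty the commenter describes.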
I cannot see a reason to wait for Piledriver. AM3+ won't survive past that chip, and +15%, even in single-thread, won't be enough (against Sandy; I'm not even talking about Ivy).
If BD had not been so bad, I would have hoped for a price drop on the Thuban and gone for it. But now I fear price spikes on the old Phenom II X6 as it approaches its EOL.
... using a chainsaw. Newegg sells the 2500K for USD 220. I'm thinking something like 170-180 for the FX-8150. I was expecting a lot from the FX line, and that was probably my mistake. Too bad.
I guess we can take comfort in the fact that some things never change: AMD processor naming is always behind the curve (since before Intel's Core 2 Duo). Guess I'll hang onto my X4 955 @ 3.6GHz for a while longer. It'll be the last AMD processor I'll bother with (I'm tired of being faithful and waiting on them).
"At the same clock speed, Phenom II is almost 7% faster per core than Bulldozer according to our Cinebench results."
I am far from being an expert in CPUs, but isn't the main advantage Intel has had from Core 2 through Sandy Bridge per-core performance? Not clock speed, and not core count.
I've seen some benchmarks showing real-world usage of the SB i3 dual-core where it outperforms a higher-clocked quad-core Phenom II.
Meaning AMD giving first priority to clock speed and core count was the wrong thing to aim for, even if they had achieved a 4GHz+ stock 8-core processor. But to actually go backwards compared to such an old architecture is a disaster. (My first post here; is there a way to edit posts?)
The thing is that Phenom II, AMD's older arch, is FASTER clock for clock than their new Bulldozer arch. Intel is far ahead of both, but it's a bit laughable that AMD's older CPUs actually outperform their new ones.
Hey Anand, did you happen to get the power consumption numbers when you hit 4.7GHz?
This is... disappointing. I knew the single-threaded benchmarks were going to be bad, but you need to be running something that needs all 8 cores; if not, it's of no use. Kinda like using a Magny-Cours to run Crysis.
Ignoring the power consumption, it seems to me that at 4.6GHz it should start being quite competitive. So can we expect base clocks to rise once a significant volume of these chips starts getting out and GloFo refines the process? I also must admit I didn't expect 2bn transistors. All this time AMD was bragging about how much they saved, and then we get this behemoth. No wonder they have process issues; chips this big always do.
Well, it is an 8-core, not a 4-core. 2x 995M (Sandy Bridge 4C) is almost 2B, though I'm sure simply doubling isn't exactly right; a lot of it depends on the L2/L3 amounts. The savings seem to be minimal.
I am still confused about why they so deliberately chose relatively low single-threaded performance. My main application is multithreaded, but since it's such a mixed bag overall, I'm pretty unsure whether this will be my next CPU, unless I see convincing Cubase 6 benchies. For an FX moniker, it needs to perform better than this anyway.
I'll throw in a lyric from The Fixx "It doesn't mean much now, it's built for the future."
This might be the final nail in the coffin. We might have to wait longer for it to be competitive? People have literally been waiting for years for AMD to catch up. Probably by the time Piledriver (or whatever it'll be called) comes out, IB will be out (putting AMD even further behind Intel).
BTW, I think Tom's Hardware tested it with Windows 8 and it was still a turd.
I seriously hope you can get some answers/reasons why AMD released such a woeful product. Maybe this was why Dirk was fired? All I know is, after 7+ years of AMD, my next processor will be Intel.
Desktop CPUs are halo parts and as such are irrelevant. It's the server and OEM laptop CPUs where AMD needs to perform, and AMD's server share just keeps dropping.
FWIW, when the Athlon 64s first came out, we bought a bunch of them. They were not bad, but there were clock issues: the TSCs weren't synchronized, so we had to set idle=poll (and thus use more watts).
You can blame the OS developers, but most people buy new hardware to run existing operating systems and programs on, not future unreleased ones.
It sure is looking bad for them. I won't be buying AMD CPUs but I hope the fanboys keep them alive ;).
Sun went an even more extreme route regarding FP performance on its Niagara CPUs: as far as I remember, the first-generation chip had a single FPU shared across eight cores, and performance was not even close to a Core 2 Duo of that era. That's what I thought when I first read about the "module" approach in Bulldozer maybe a year ago: man, this must be geared primarily towards server workloads; it will suffer on the desktop. I guess FPU count = core count would have been more appropriate for the FX line.
Would this be a good candidate for web server applications because of its excellent multi-threaded performance? How about to host a bunch of Virtual Machines?
I've also been wondering if running a lot of VMs would work better on this CPU. But I don't really know how you'd benchmark that kind of thing. Time and total energy consumption to serve 20,000 web pages from 12 VMs?
This processor is worse than the Phenom II X6 for most of my workloads. My next machine will be Sandy/Ivy Bridge.
But... we haven't seen this clock ramp up yet. As Anand mentions on page 3, remember the initial Pentium 4s? The Willamette 1.4 and 1.5GHz processors were clearly worse than the competition, to say nothing of the PIII line. In time the P4 consistently beat the much higher-IPC AMD processors on most workloads, especially after the introduction of Hyper-Threading. This really does feel like a new Pentium 4! Trying a design based on clock speed, and one-upping Intel's Hyper-Threading by calling four "1.5" cores eight (we hyperthread your hyperthreading!). It will be a wild ride.
At this point, I don't see anyone beating Intel at process shrink and they're a moving target. But competitive pricing, quick ramp up and a few large server wins can still save the day. Dream of crazy clockspeeds :-)
- Expect to see Bulldozer targeted towards servers and consumers who think "8 cores" sounds sexy, at least until clockspeed ramps up.
- Processor performance is not the limiting factor for most consumer applications. AMD will push APUs very heavily, something they can beat Intel at. Piledriver should drive a good price/performance bargain for OEMs, and for laptops may have idle power consumption in shouting distance of Sandy Bridge.
I'm more optimistic about AMD now. But my next machine will still be Sandy Bridge / Ivy Bridge.
I see people saying that they'll be waiting for Piledriver. Why not wait for AMD Drillbit, or AMD Dremel? How about AMD Screwdriver or AMD Nailpuller? Tomorrow my 2600K arrives. I'm done. I had a build ready with an ASUS 990FX waiting for Bulldozer, but I will "bulldoze" the part back to Newegg.
I must admit, I was worried when I saw the large amounts of L2 cache before the launch. AMD engineers must have been taking the summer off, and decided to throw more cache at the problem. AMD needs a new engineering team. Why the hell can Intel get it right and they can't?
AMD, your CPU engineers are lazy and incompetent. I mean, it took you "only" four years to build your own version of the Pentium 4.
The bottom line is that its time to fire your lazy retarded and incompetent engineers, and scout for some talent. That's what every other company does that wants to succeed, regardless of the industry. I mean, look at KIA and Hyundai for example, they went out and hired the best designers from Audi and the best engineers they could buy with money. Throw some more money at the problem AMD and solve your problems. And if those lazy fat fucks in Texas that you call engineers don't deliver, look somewhere else. Israel or Russia maybe? Who knows... Just my 2 cents.
I know nothing of AMD employee's work ethic, but...their problems may have nothing to do with raw technical talent. But you are right about one thing - throwing money at a problem can be helpful, and that's likely why Intel has succeeded for so long. Intel has a lot of cash, and a lot of assets (such as equipment). They can afford the best design/debugging tools (whether they buy'em or make'em), which makes it much easier to develop a top product given the same amount of microchip engineering talent.
And just because they're based in Texas doesn't mean their staff is all-American. Like most US tech firms, quite a bit of their talent was probably imported.
Just skimmed the review; not as awesome as I had hoped for, sadly. That being said, I'm thinking it might well be a nice improvement over the stock Core 2 Quad Q6600 in my Dell. I could go Intel, obviously, but... I dunno. I've got an odd fascination with novel things, even if they are rough to begin with. Hell, I've even got a WP7 phone :p
Then make sure to get a quality power supply and motherboard to go with it. Also, your power bill will increase, but not directly from the Bulldozer CPU, nope, but from all the heat that it will make... you will need to run your air conditioner which is a power hog.
/* Patiently waiting for AMD's next gen architecture codenamed "Bendover" */
Is 'ScrewdOver' next on the roadmap after 'Bendover'? I'll have to look in the official AMD leaked slide repository.
I still think some intrepid AMD faithful will try BD out just because they're wired that way, and many of them are going to like it. I bet it compares better to Lynnfield than to Sandy Bridge... except Ivy Bridge is closer in the future than SB's launch is in the past. This could be an interesting and relevant product after a few years, but the need is dire now. AMD is going to kill off the Phenom II as fast as possible.
AKA, the reason all of us commenting are reading this review: gaming performance. And AMD chose not to even compete there. Bunch of monkeys over at AMD CPU engineering?
It's now a non starter in the enthusiast market.
I've often thought recently that AMD (or any manufacturer really, but AMD as a niche filler would be the more obvious choice given their market position) would do well to position itself as the gamer's choice, and even design its CPUs to excel in gaming at the expense of some other things at times. I really suspect this strategy would lead to a sales bonanza. Because really, the one area where consumers crave high performance is pretty much gaming. It's the one reason you actually want a really high-performance CPU (provided you don't do some sort of specialized audio/video work) instead of just "good enough", which is fine for general-purpose desktop use.
Instead they do the exact opposite with Bulldozer. Facepalm. Bulldozer is objectively awful in gaming. Virtually nobody who posts on any kind of gaming or gaming-related forum will ever buy one of these. Unbelievable.
Perhaps making it even more stinging, there was some supposed reviewer quote floating around pre-NDA-lift about how "Bulldozer will be the choice for gamers" or something like that. And naturally everybody got excited, because that's all most people care about.
Combine that with the fact that it's much bigger and hotter than Intel's chip, and it's almost an unmitigated disaster.
This throws AMD's whole future into question, since apparently their future is based on this dog of a chip, and it even makes me wonder how long before AMD's engineers corrupt the ATI wing and bring the GPU side to disaster too. The ONLY positive to come out of it is that at least AMD is promising yearly improvements (key word: promising). Even then, the absolute best-case scenario is that they slowly fix this dog in stages, since it's clearly a broken architecture. And that's the best case, and it assumes they will even meet their schedule.
Anand lays so much of the blame on clock speed, hinting that AMD wanted much more. But even a 4.3GHz Bulldozer would STILL be a dog in all-important gaming, so there's little hope.
I have used many AMD systems and deployed over a thousand AMD CPUs in Unix workstations at my old job. I cheer for AMD. But AMD is going to have a hard time ahead. Selling its fabs to GlobalFoundries was the biggest mistake of them all.
We are in the post-PC world. If tablets count as computers, 20% of PCs will use ARM in 2012. That's a lot of lost CPU sales for AMD/Intel.
I predict that AMD will be gone within 3 years. Maybe someone buys them? After the settlement with Intel, AMD can now transfer its x86 license to the next buyer (pending Intel's approval).
Maybe Google could buy AMD and build complete computers ?
I see two things that might happen to AMD: 1) they transform into a GPU manufacturer completely (and of course keep making those silly APUs), or 2) if that damn x86 license really is transferable, they merge with NVIDIA. Neither of these two companies looks too hot these days, so they might as well work together.
We may be in the "post PC" era, but don't count x86 out. Recent studies indicate there's a corollary to Moore's law that applies to compute power per watt; the study goes back to 1961. This suggests that x86 is only a few years away from running on mobile devices, which is what MS and Intel are betting on. And frankly, it makes sense. Ultimately, I don't want two different things (a mobile device and a PC), I want a PC in my pocket and one on my desk.
I agree with some that Bulldozer is more like Faildozer, but...
Let's keep supporting AMD, so the one getting piledriver'd in the naughty place won't be you when Intel has zero competition left because you didn't want to spend a little more for a little less... and let's be honest, it IS just a little.
If enough people drop AMD, in the end WE will be the ones paying for AMD's lack of support.
At least AMD is trying... the question is, what are YOU going to do to stop Intel from becoming your bunghole-piledriving overlord?
Supporting incompetence is like socialism (or even communism). Eventually those that are supported will sit around like dogs all day and do nothing but lick their hairy balls...
Apparently money won't motivate the Monkey Engineers at AMD, so maybe making fun of them will. I mean, where is their pride, right?
By the way, I've seen real socialism, so I have a clue what it is. And it is what I just described. I don't like Intel because they are not healthy for our economy, yet their only competition just pulled a gigantic fuck-up.
oooooo oooga boooga socialism is bad....it take away aaalll you money...it verrry baddd.....oooooogabooogaboooo!! LOL
have fun getting eaten alive by china after your capitalistic model became cancerous and will die from the inside out.
your country is bought and paid for and will be eaten alive by the "communistic" chinese who are in fact just the same as what the usa has become: a corporate dictatorship (not communism and certainly not socialism).
Sorry, I didn't mean to scare you more than you obviously already are. I would send you some lube to ease the pain, but I'm all out ;)
Dingetje; China has serious issues when it comes to the welfare of their people. China only owns 10% of our debt, and that is thanks to China becoming capitalistic as a nation.
Wolfman: Bulldozer is a server processor. The server market is where the money is, especially with the cloud, and with enthusiast-class desktops becoming rare. Intel has 30x AMD's market capitalization; they can afford to target multiple markets. AMD can't.
Bulldozer is superior at integer processing in both performance-per-core and performance-per-watt. Of course, I do wonder why desktop applications even need floating point... (for values < -2^63 or > 2^63)
Jeebus, that power consumption is going through the roof! Also, there were rumors that it would go up to 8GHz; I wonder if it would use a kW by then...
I want to see how they compare to each other when overclocked to around 4.5GHz. Also, Anand, can you do an efficiency test? Benching various overclocking speeds while monitoring power consumption might make an interesting article :)
Not really, and that includes AMD fanboys. AMD can't understand that to move forward you must abolish the old stuff for good. The brand-new and spanking Bulldozer has its roots in the ancient K6. Do something new, for crying out loud, or get lost and stop wasting time. Don't release CPUs just for the sake of offering something; that is not the point of the CPU market. Even Intel can shoot themselves in the foot with X79. It looks like it will be a similar failure to FailDozer: nobody will invest in an entirely new platform for a 10, maybe 15% performance boost over X58, which is the new Socket 775. Long live S1366! Plenty of life and fuel left in the Nehalems, plenty... If you wanted to buy Bulldozer, go and buy an X58 platform instead. After nearly 4 years on the market it is [somewhat ;)] dirt cheap.
Anand, one thing: I find it puzzling that you reckon Bulldozer will do well in server environments. With that kind of performance/watt and inefficient power management? No chance in hell. i7s/Xeons will eat FailDozers for breakfast.
I'm not. I completely agree with everything that you've said.
And, if I might add: dear AMD, and dear AMD engineers (lazy fucks that you are), throwing more cache at an already inefficient architecture is not going to solve your problem. Add to that that you people (yes, you, AMD people) are calling a 4-core CPU an 8-core because you've added another integer unit to each core. WTF?! That's almost like calling a quad-core Intel 2600K an 8-core CPU because it has Hyper-Threading.
I have been an avid AMD supporter since 1996. I have spent many thousands of dollars on your CPUs and other hardware that you people make. I'm done. Not another penny! Ever!
"Brand new and spanking Bulldozer has it roots in ancient K6"
There is some K7 heritage left, but I cannot see in any way how this CPU relates to the K6! The K6 had a very short pipeline and an unpipelined FPU, for example.
As for the server market: AMD seems to have overclocked and cherry-picked the 3.6GHz FX-8150. For the desktop market, clock speed rules, so AMD didn't care too much about power consumption.
For the server market, they can go with lower-clocked 95W TDP parts, which should have a much better performance/watt ratio. Also, the server market runs at 30-80% CPU load, while the desktop market runs a few cores at 100%, so the power-management features will show better results in the server market.
Gaming software needs fast caches (latency!), since the IPC is otherwise decent. Server software is more forgiving of cache latency, as IPC there is determined more by the number of memory accesses and thread synchronization; that is why the L3 is so handy. I think you should wait to condemn Bulldozer until it has been benchmarked in our server benchmarking suite.
I am worried about the legacy HPC performance of this chip, though. It will take some recompiling before the chip starts to shine in this market.
Had to get this far in the comment thread for sanity. Clearly, AMD (and one may disagree) has chosen to go for superior integer performance in a threaded architecture. D'oh! So what? It means they don't give a rat's rectum about gamers. They care a whole lot about application and database servers. They also accept the fact that single threaded is dying, so just kill it.
Their roadmap is aggressive, but when was the last time AMD came close to meeting its schedule? Not going to happen. But I do hope they do, for consumers' sake.
AMD really bent over and grabbed their ankles....I'm just wondering why it took so long to release douche-dozer...I was really hoping they would have a good part this time...Will Intel stand alone as the sole quality CPU maker?? Only time will tell, but it looks to be so....
I must say, I did expect this. That price drop wasn't exactly a giveaway, was it? Single-threaded performance is generally poor, and there really is something wrong with the caching. I simply refuse to believe a lack of BIOS optimisations is at fault for any of this... and blaming Windows 7 for not truly understanding Bulldozer's idiosyncrasies? Come off it; Windows 8 won't even be around when Piledriver appears, so we'll have to wait to see the second generation of this particular microarchitecture performing more like it "should". Bringing back the FX moniker certainly attracted attention; however, if by doing so they wanted to remind us that the FX-51 was a server CPU, they've succeeded, if only on that basis, as the FX was king of all, not just in select benchmarks the way the P4 tended to be.
I can't wait for Johan's server review; I just want to see if this thing really does well in its natural habitat. It's got to have a success somewhere. Thankfully, I can see far more optimism in this area. Incidentally, I was expecting Bulldozer to be able to work on eight 128-bit FP instructions per clock as opposed to 6 with Thuban, so obviously I got my wires crossed on that one.
You can't argue that Bulldozer doesn't have a lot of promise, but at the same time, you can't argue that AMD hasn't been performing damage limitation on an already faulty product.
Nobody, and I mean nobody at all, expected Bulldozer to reach SB-like performance, and obviously nobody saw sub-Phenom II performance in certain applications coming either, but almost everything promised has been delivered, at lower prices than Intel, the way AMD has always done it. Quoting the article: "In many ways, where Bulldozer is a clear win is where AMD has always done well in: heavily threaded applications. If you're predominantly running well threaded workloads, Bulldozer will typically give you performance somewhere around or above Intel's 2500K."
PS: wolfman3k5, stop your Intel shilling; it almost looks like Intel is paying you by the hour.
I get $22.50 per hour from Intel plus tips. I also get a $50.00 bonus if I surpass 1000 comments / posts per day. Between 3:00AM and 7:00AM I get $25.85 per hour. I make good money writing nice things about Intel. What do you do?
What's surprising is that you apparently think that's "good money".
Guess what, you little dumbshit kid: profit-savvy professionals will still be running AMD. I couldn't care less about your shitty lightly-threaded games and optimized synthetic benchmarks.
You need to bear in mind that a) AMD reintroduced the FX brand just for Zambezi, and b) JF-AMD actually started a thread entitled "The Bulldozer Blog Is Live!" on www.overclock.net. Regardless of whether John Fruehe is a server-focused guy or not, the point is that he and AMD both targeted the client side in terms of i) overclockers and ii) gamers. I might be wrong, but that's how I see it. Yes, he didn't come out and say directly that Zambezi would be a great gaming solution, but he DID say that IPC would be an improvement over their past products. Now that the reviews are out, he's nowhere to be seen, barring the odd login to do who-knows-what. Does overclock.net have any leaning towards the server market in any way?
If Zambezi's poor performance is partly down to using faulty ASUS boards/anything less than 1866MHz RAM/an L1 cache bug/some weird hardware combinations/WHATEVER, I'm sure we'll find out in time, but regardless, it's going to be harming non-gaming workloads as well, so it's important to people like you as well.
Just thought I'd say that I've been a bit harsh to JF there. Out of all the AMD people who could've come along to have a chat, he was definitely the bravest. It was on his free time, and he's probably getting copious amounts of hate messages just for being an AMD rep.
I guess the prices on 2600k won't be going down anytime soon. I had already built my complete system in my head. Then the reviews came..
I kind of figured that if AMD was firing people and resignations were being handed in before a major launch, it wasn't going to be good. Also, no early release of benchmarks. That in itself was suspect. If they really had such a great processor, then why all the secrecy? I was hoping it was an Apple-style play. Boy, was I wrong.
You guys buy the "faildozer" and help keep the prices of the 2600K low. I'll be looking for a 2600K....
I'm not an expert, but Bulldozer seems to be a server chip pressed into desktop service. Designed for highly threaded workloads (and to scale to even more cores than 8), many consumer tasks just aren't its forte. While it isn't competitive in single-threaded performance, if you run highly threaded workloads often enough and aren't afraid to O/C to boost single-core performance, Bulldozer can be the better chip, if the price is right. The 8120 might be an awesome value in this scenario. We'll have to wait for reviews to be sure.
One question, please. When you O/C'd the 8150, did you only use stock cooling? From the review it sounded like you did, but instead of saying so clearly, you said it wouldn't do 5GHz on "air" (I believe that was the statement? Feel free to flame me if I'm wrong. :D). So, to be clear, would it not do 5GHz on air with a top notch cooler, or did you only try the stock cooler?
I was wondering the same. The OC part of the review seemed rushed, almost lazy. I hope Anand can correct this and clear up the doubts: can one of these CPUs run at 5GHz or not?
All the more reason to have tested it with an aftermarket cooler. If it hits 5GHz only with AMD's sanctioned cooler (which, given the insignificant differences between the Corsair and Antec offerings, wouldn't surprise me if it was just a rebrand), it can be a bit of a problem for those of us already using a similar water cooler.
Not only does this CPU fail, it fails so hard it even struggles to compete with its aging predecessors. A new architecture AND a die shrink, and it can barely hold its own.
What's really sad is that AMD could have just updated K10 and probably achieved the same (or likely better) results.
I'll probably end up buying one. I'm still on my socket-939 Opteron 165, and I can wait a little bit more. Since many of you seem to be wont to skip this one, I'll probably get it at a better price.
Also, since I don't give a flying fsck about Windows, I'll probably get a Bulldozer-aware CPU scheduler before you clowns do. :P
Seriously disappointed now. I'm glad they put more than 2 SATA 6Gbps ports on the mobo, but that's a $200+ mobo, so it doesn't really matter.
AMD's CPU performance is retarted. Honestly, all the hype, all the delays, this is a disaster. Good thing their GPU division is executing well or I'd be seriously worried about this company being around in 4 years.
Intel needs to stop jewing out on their mobo configurations. I need AT LEAST 4 SATA 6GBPS ports and I was like 12 USB 3.0 ports, but even with my gripes about them cheaping out on mobo's and switching sockets every year or two... at least their CPU's have gotten faster in the last 6 years. Beyond just expected incremental gains like AMD has made.... or this time around hasn't.
You could try to argue that describing the performance as retarded *might* be syntactically and grammatically correct, but clearly it's meant in the pejorative sense. You could have pointed out the ironic misspelling. But you didn't react at all.
What do you expect? The majority of these comments come from idiot children that only care about games and completely misunderstand the point of this architecture.
ya'll need to lighten up and take life less seriously. If you're wasting your time being politically correct then you're wasting your time... nuff said.
I disagree, jewing is totally not offensive. My friends call me jewish all the time and I'm not really any religion AND I'm mostly German. It's just joke dude, lighten up.
I like to spell retarted that way better; that's how I say it. I also spell theater "theatre", and ever since the movie Inglourious Basterds I spell bastard "basterd". I just like it better.
what's disgusting is that you're so racist it offends you. If you truly aren't racist then it doesn't matter. It's just another way of saying being cheap. And as long as you're not a pent up old politically correct fogey it's humorous.
So disappointing to read this. What on earth were they doing all this time?? AMD's NEW CPU can't even outperform its OLD CPU? Well, at least I can stick with my Phenom II X6 till Ivy Bridge comes out, and thank goodness I didn't buy a pricey AM3+ board before reading the reviews. :p So sad to see AMD come to this.....
I think when Anand publishes benchmarks with a couple of Bulldozers working together in a dual or quad-socket board (Opteron), THEN we will see why AMD designed it the way they did. If the FX achieves parity and sometimes superiority in heavily multithreaded apps vs Sandy Bridge in a single socket, then imagine how two or four of these working together will do in server applications vs Sandy Bridge Xeon. I'll bet we see superiority in most server disciplines.
I don't think this silicon was designed to go after Intel's desktop processors, but to compete directly with dual- and quad-socket Xeons.
It's intended to be an Opteron right now and, as an afterthought, to be sold as an FX desktop single-socket part to bridge the gap between the A-series and Opteron.
Indeed. The market for high-end desktop parts is very small, low-margin, and shrinking! The mobile market is growing, so the AMD A6 and A8 CPUs make a lot more sense.
The server market keeps growing, and the profit margins are excellent because a large percentage of that market wants high-end parts (compare that to the desktop market, where almost everyone wants midrange and budget parts). The zip and encryption benchmarks show that Bulldozer is definitely not a complete failure. We'll see :-)
I'm sorry, but you are wrong, sir, and here is why: they are marketing this chip at the CONSUMER and NOT the server, which makes it a total Faildozer.
If they had kept the Phenom II for the consumer and kept BD for the Opteron, then you would have been 100% correct, but by killing their Phenom II they have just admitted they are out of the desktop CPU business, and for a company that small that is a seriously DUMB move. The Athlon and Phenom II have been the "go to" chips for many of us system builders because they gave "good enough" performance in the apps people actually use, but Faildozer is a hot pig of a chip that is worse for consumer loads in every. single. way. than the Phenom II.
I'm just glad I bought an X6 when I did, but when I can no longer get the Phenom II and Athlon II for new builds, I'll be switching to Intel. BD is simply worthless for the consumer market and NEVER should have been marketed to it in the first place! So please get off your high horse and admit the truth: the BD chip should never have been sold for anything but servers.
You introduce the fact that AMD lengthened the pipeline in the transition to Bulldozer without explicitly mentioning the pipeline length. How many stages exactly is Bulldozer's pipeline?
Well, there clearly seems to be something wrong with the utilization of the modules, combined with the way-too-high latency of every cache level and of memory. Single-threaded performance is hit by that, and so is gaming performance.
Secondly, when overclocking, just like with the previous generation, do something more with the NB clock instead of just upping the GHz. There is more to an architecture than clock speed...
I did not expect Bulldozer to rock the CPU world, but... each Bulldozer module has a 256-bit shared FPU, capable of executing 2x128-bit FP instructions at the same time, vs. one 128-bit FPU per core in Phenom II.
So the Bulldozer 8150 should be able to process 4x256-bit FP instructions or 8x128-bit FP instructions at a time, while the Phenom II 1100T should be able to process 6x128-bit FP instructions at a time.
The short calculation above shows Bulldozer should have an advantage over Phenom II in FPU-heavy computations.
The tests don't lie, and the two processors perform the same, but there should have been a difference, in theory.
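The back-of-the-envelope theory in that comment can be written out explicitly. This is just a sketch of the commenter's reasoning; the per-clock issue rates (2x128-bit per shared module FPU, 1x128-bit per Phenom II core) are assumptions taken from the comment, not measured figures:

```python
# Hedged sketch of the theoretical FP-throughput argument above.
# Assumed issue rates (from the comment, not measured):
#   FX-8150:  4 modules, each shared FPU issuing 2x128-bit FP ops per clock
#   1100T:    6 cores,   each FPU issuing 1x128-bit FP op per clock

def peak_fp128_per_clock(units, ops_per_unit):
    """Theoretical 128-bit FP instructions issued per clock cycle."""
    return units * ops_per_unit

fx8150 = peak_fp128_per_clock(units=4, ops_per_unit=2)    # 8 per clock
x6_1100t = peak_fp128_per_clock(units=6, ops_per_unit=1)  # 6 per clock

# On paper Bulldozer holds a ~1.33x per-clock edge, which is why the
# near-identical benchmark results look wrong "in theory".
print(fx8150, x6_1100t, round(fx8150 / x6_1100t, 2))
```

Of course, real workloads rarely keep every FP pipe busy every cycle, which is one plausible reason the measured numbers come out even.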
This is an utter failure from AMD. For a personal workstation or home computer there is simply no reason to choose AMD over Intel. There are even cases where the older AMD CPU is better, which to me looks insane.
So we get nearly no new price pressure on the market from this either. It's like a silent release: the market won't notice, and customers won't notice that there is a new AMD CPU on the market, because it has nothing of interest to offer. This is really disappointing.
Worse performance, crap power consumption. The only way to save this is a big price cut, say to $150-160.
The AMD-approved water-cooling system looks to be a gimmick, and I say this as someone who prefers water over air cooling. I do not see the point of CPU-only water cooling: if that is all you want, then air cooling is cheaper, almost as good, and a heck of a lot easier to install. IMO, CPU+GPU is the minimum if you want to water cool, unless you are into overclocking, where a single rad is too small.
I guess AMD is rapidly becoming a niche player, because as bad as BD is, Intel's Atom is worse compared to the AMD equivalent.
A very steep price cut could save the product like you say, but that might mean AMD loses cash, though it's not like they won't lose cash anyway being so far behind. I really do not see a place for this new CPU.
This is bad though, because we need competition in the market; otherwise Intel only has to take customers' willingness to pay a certain amount for a CPU into account, with no competition to worry about.
I do like water cooling though, but that's because I like to overclock a bit. Water-cooling systems can also be quieter, but not necessarily. I don't believe most people would gain from a water-cooling system; it would only increase the price of the product.
I swear, you fucking kids are ridiculously stupid. Those of us that actually use CPUs to their full potential understand that this is far from a "failure". You gamer children haven't got a clue what "future-proofed" means.
Hey retard, there is no smooth way to utilize this CPU. Try to realize that.
There are few cases where strong CPUs are needed: servers, graphics, data processing, and gaming. Most readers of this forum are probably gamers and thus write from that perspective. Is that so hard to understand?
From my point of view as a solution developer of fund and insurance systems, this CPU is not of interest: the alternatives for servers are better, and for client computers a strong CPU is usually not needed at all, so it is not of interest there either.
We don't know the server performance yet. There's also some frantic investigations going on into why the client benchmarks were so poor, so watch this space.
I always observe. I hope AMD makes a comeback because the market needs competition to work in favor of the customer.
Analysts believe AMD is becoming irrelevant in the processor market, which is really bad. They now value AMD's shares at $4 each, rather than the $5 they believed before.
I do not know whether new drivers would help AMD out, but 10% more or less does not cut it. I really hope AMD makes a comeback with a new CPU in a couple of years, but only time will tell. How they could release this one is beyond me; they had to have known the numbers. Something is not always better than nothing...
Because there is no longer anything to wait for. Bulldozer is an absolute failure. Oh dear. That's not helpful at all, because a competitive AMD CPU is just what the market needs right now.
The worst part about this review is the motherboard diagram. Earlier this year, the diagrams for the SB950 showed a 4GB/s A-Link Express III connection. The diagram in this review shows 2GB/s, the same as the previous SB850 and competing Intel chipsets.
Who cares about CPU speed... they're all close enough. We need I/O speed way more. I have a new server with 5x 6Gbps SandForce SSDs in a RAID 0 array, and we hit the 2GB/s limit with just 2 drives. (2GB/s bidirectional is 1GB/s each way.)
And no, I have zero confidence that any 4-port/8-port RAID controller has enough power either. Maybe the most expensive 24-port ones can do it but I am not going to continually buy and return $2000 controllers until I find one that is beefy enough. Especially since the majority of the RAID functionality is completely wasted with SSDs.
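The link-saturation claim above checks out arithmetically. A quick sketch, assuming a typical SandForce SATA 6Gbps drive sustains roughly 500 MB/s of sequential reads (an assumed figure, not one from the post):

```python
# Why two SSDs can saturate a "2 GB/s bidirectional" chipset link:
# bidirectional bandwidth counts both directions, so reads get only half.

LINK_BIDIRECTIONAL_MBS = 2000               # 2 GB/s total, both directions
link_one_way_mbs = LINK_BIDIRECTIONAL_MBS / 2

SSD_READ_MBS = 500                          # assumed per-drive sequential read

drives_to_saturate = link_one_way_mbs / SSD_READ_MBS
print(drives_to_saturate)                   # 2.0 drives already fill the link
```

So anything past the second drive in the array is bandwidth the chipset link simply cannot carry.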
OT, but you may want to try a SAS2 HBA based on the LSI 2008 chip - They're generally around $150 and I promise they can do:
A) Far higher performance than any crappy motherboard controller. B) Way more than 2GB/s full duplex.
If you're worried about storage performance while using an on-board disk controller, you're just going about it all wrong, especially if you think you're going to gain much using their crappy software raid.
This monster, in this particular test, adds over 144W going from idle to full load! (For comparison, the i7-2600K adds a mere 78W and performs a notch better.) Assuming it already wastes ~10W at idle, and even factoring in the increased power usage from the chipset/memory, I still very much doubt that this...hm...thing...can fit into the stated 125W TDP.
Good thing Anand didn't do the performance per watt maths...it would've painted a devastating picture.
Intel makes most of its money on server CPUs, and if BD performs well there, then I suppose AMD still has some breathing room.
Otherwise, with Ivy Bridge coming, AMD doesn't have a single chance of surviving in the near future. Graphics are much better with Ivy, and its video decoding engine seems to be much better than even Nvidia's or AMD's.
"Sandy Bridge, which on Intel's 32nm HKMG process is only 1.16B transistors with a die size of 216mm2" But in the table below it you say it's 995M transistors.
In the AMD table, you mention '3MB' as NB Clock for the AMD Phenom II X6 1100T.
Technically, both are correct. One of us wrote the text and the other wrote the diagram, and each of us picked a value; they just didn't match. Whoops. Fixed
Good review, Lal Shimpi. For OC... I think it's no problem to hit 4.8GHz stable, but you need a better air cooler (some Noctua or so). I hit 4840MHz stable with a D14. It boots up to 5050MHz; 5250MHz for validation.
Apparently AMD is busy with some sort of bulldozer optimization patch for windows 7. Anand, will you guys be updating your benchmarks once this comes out?
Sigh, I was really hoping for better competition to drive the prices down.
Going from 85W to 229W in power consumption is 144W. If we assume an 80% efficient PSU, that's 115W at the CPU, plus ~10W idle, so it's completely maxing out the TDP. Look at it: it's almost 100W over the 2500K, which is what it's most competitive with. I can't see that being popular at home or in the server space; it's expensive and a huge problem to cool sufficiently. And with that die size it must cost tons to produce, so the margins must be slim to none. Crap for customers, crap for AMD. I didn't dare get my hopes up very high, but this I would call a total disaster. I honestly did not think it could get this bad.
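The back-of-envelope math here can be sketched out like this (my own sketch; the 85W/229W wall figures are from the post above, and the 80% PSU efficiency and ~10W CPU idle draw are the commenter's assumptions, not official numbers):

```python
# Rough check of the power numbers above. Assumed inputs (not measured
# by me): 85 W idle at the wall, 229 W at full CPU load, 80% PSU
# efficiency, ~10 W CPU idle draw.

idle_wall = 85.0    # system power at idle, measured at the wall (W)
load_wall = 229.0   # system power at full CPU load (W)
psu_eff = 0.80      # assumed PSU efficiency

wall_delta = load_wall - idle_wall   # 144 W extra at the wall
dc_delta = wall_delta * psu_eff      # ~115 W extra on the DC side
cpu_idle = 10.0                      # assumed CPU idle draw
cpu_load_est = dc_delta + cpu_idle   # ~125 W, right at the 125 W TDP

print(f"extra at the wall:       {wall_delta:.0f} W")
print(f"extra after PSU losses:  {dc_delta:.0f} W")
print(f"estimated CPU load draw: {cpu_load_est:.0f} W")
```

Which lands the estimate right at the rated 125W TDP, matching the "completely maxing the TDP" conclusion.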
BD seems to be very forward-looking, and I think it will be worth it. Look at the 7-zip vs. WinZip performance (I can't remember who did the comparison): BD was the worst in WinZip and the fastest in 7-zip, slower than Propus in one and faster than Sandy Bridge in the other. Intelligently threaded games really rock with BD, but older or less technical games tank.
I agree with others that this is more of a 4-core, 8-thread CPU, but honestly, counting cores is dumb. Threads are what matter, and this can run 8 side by side without adding latency, unlike Intel's HT.
And really, clock speed was 'abandoned' because it wasn't feasible to pursue it any higher, but properly executed, a high-clock design can allow deeper pipes without sacrificing too much latency, achieving more per clock at a higher clock rate. And from what I've seen on the power consumption side of things, 4.6 doesn't draw as much as 3.7 on my Phenom II.
Actually, performance per watt is what matters, and this CPU fails horribly in so many areas. This product would have been better off unreleased. You'd have to be somewhat insane to purchase it.
As both AMD and Intel now use dedicated hardware for AES, I feel simply testing AES performance isn't enough. A benchmark of the AES+Twofish+Serpent cascade, or at least AES+Serpent, would be more telling at this point. Don't get me wrong, I love that you guys even run a benchmark related to encryption, but it needs to be updated.
About BD, I'm also extremely bummed out that it didn't turn out better than this. Of course there might be room for improvement with patches or a CPU driver for Windows 7, etc., but considering the TDP, transistor count, and everything else, this is a huge loss for AMD.
I'm still interested in seeing how the Opteron versions will perform in specific tasks, as the architecture itself seems really interesting. Someone obviously spent a lot of time thinking this design through, and I'd like to believe there's at least one particular workload where BD can actually flex its muscles for real.
Since Bulldozer wasn't created with 3D shooters in mind, it would have been nice to see some financial/engineering/scientific benchmarks instead. Anandtech used to differentiate itself from the kiddie sites by providing that sort of analysis. I guess things change, like my RSS subscription to Anandtech articles will.
That said, the power consumption numbers pretty much say everything I need to know about the CPU series- the constraint on almost all HPC is power, not SPECint or peak flops.
Unfortunately, a huge majority of the enthusiast market are gamers. If you truly want productivity benchmarks, then wait for the server chips. FX CPUs aren't marketed as server parts, but they do perform like them.
With that said, the FX lineup is a decent multi-threading powerhouse, and not a flop in that respect.
I'm not an AMD fanboy, but the reason AMD gave is pretty reasonable. Thread locality is an important factor in the Bulldozer architecture, primarily because memory latency at the cache level is pretty high. If the OS can't schedule threads to the correct cores, there will be a lot of inter-core data movement, and problems like false sharing can become more apparent.
While on one hand as a PC user and builder...and really wanting to build a BD based mindblower, I'm a little disappointed...OK, more than a little...by these results. On the other hand, as an MBA and investor in AMD, I see the big picture and have to reluctantly agree...and hopefully profit.
If you have constrained and finite capabilities in both design and manufacturing (GloFo needs its butt kicked), you maximize along a marginal-ROI track. Right now that means server chips, to support the growing and lucrative cloud, data warehouse, HPC, and corporate server markets, plus the growing Fusion space integrating modest x86 with robust video on low-wattage single chips. You end up with exactly what we have here: BD (a server chip rebranded) and Llano, with plans to improve both with descendants.
In highway terms, it would be akin to building semis and commuter cars. This is the high-performance forum, and while Ferrari Enzos are cool and badass, it's hard to fault AMD for the approach. After all, when you're on the road today, you'll see a bunch of semis and commuter cars... it's economics. Performance sells magazines; utility sells products.
BD must be a killer server beast because Cray (you know Cray, right?) just got a $97M contract from Oak Ridge NL about a month or so after taking possession of the first box of BD based server chips. I think Cray knows a thing or two about making computers haul butt.
Now we'll see if any of that translates into the client space...
I'll agree with this. We have a ton of servers -- both Intel and AMD. More integer cores are better. FPU? Games? 3D? Media encoding? Who cares. Hyper-Threading does nothing when you peg all cores with VMs running at full blast. For example, we have one configuration where we run 4 VMs on a 3GHz Phenom II X4 and it performs roughly the same as our 4-core 2.8GHz i7. If we add a 5th VM, both slow down equally, showing that there are simply no free resources in the CPU pipeline for Hyper-Threading to steal.
So where the Bulldozer platform is extra good is as cheap/disposable/uniform VM hosts running Linux. Instead of 1 mega-expensive quad-Xeon box costing $100K, you have 10 x 1U Bulldozers that can handle 8 VMs each at full utilization without speed degradation for $10K. In addition, you'd probably run something like CentOS (or RHEL), whose default packages are not compiled with Intel's compiler, so many of the +25% gains you see in benchmarks here don't exist at all in the Linux world.
The most disappointing part, though (which I mentioned previously), is the lack of speed improvement in the chipset. The first bottleneck when adding more VMs is CPU cores, but the second is disk bandwidth. If you have disk-intensive VMs, you need a separate hard drive for each VM to keep HDD seek latency from killing performance. But putting 8 HDDs in a 1U is impossible, so you need 2U/4U servers taking up too much rack space.
The answer, of course, is a fast SSD ... 500MB/s with near-zero latency can be split across separate VMs with linear degradation, versus exponential for HDDs. But the SB950 chipset at 2GB/s bidirectional can only handle 2 fast SSDs. So 1000MB/s divided by 8 VMs reading at full blast is 125MB/s per VM -- which is regular SATA HDD speed. Double that to 4GB/s and you could easily put 4 x 2.5" SATA3 SSDs in a 1U, delivering 250MB/s to each VM. Now we're back to at least 2nd-generation SSD performance.
(Note, all the Intel chipsets also max out at 2GB/s bidirectional and stuffing a super expensive raid controller in a 1U is not cost effective.)
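The per-VM arithmetic above can be sketched quickly (assuming, as the comment does, that a "2GB/s bidirectional" chipset link means roughly 1GB/s each way; these are the commenter's figures, not measurements):

```python
# Sketch of the per-VM read bandwidth math from the comment above.
# Assumption: "bidirectional" link bandwidth splits roughly in half
# per direction, and all VMs read simultaneously.

def per_vm_read_mb_s(link_gb_s_bidir: float, n_vms: int) -> float:
    """Usable read bandwidth per VM, in MB/s."""
    one_way_mb_s = (link_gb_s_bidir / 2) * 1000  # one direction, in MB/s
    return one_way_mb_s / n_vms

# SB950 today: 2 GB/s bidirectional shared by 8 VMs -> HDD territory
print(per_vm_read_mb_s(2.0, 8))   # 125.0 MB/s per VM
# Hypothetical 4 GB/s link: back to early-SSD speeds per VM
print(per_vm_read_mb_s(4.0, 8))   # 250.0 MB/s per VM
```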
Great analysis...I'm not a server guy and can hardly keep up with the average 15 year old on desktop jargon and theory, but it seems that the bigger cache would mitigate the roundtrips to disk in the conditions you describe. I guess that's why they left that fat L3 cache on the die...assuming Interlagos and Zambezi are really closer than cousins...more like siblings.
Great financial case...that I get. I heard a joke the other day that went something like "Whenever they say it's not about the money, it's about the money". It's always about the money... :)
This is reminiscent of the Phenom I launch, minus the TLB bug. You have a chip that barely outperforms its predecessor and at times performs a little worse. AMD might be able to make a Phenom II-like product out of Bulldozer, but I think it's too late. They needed to come out of the gate strong with this one.
Right now I'm on a Phenom II and will be upgrading to Sandy Bridge soon. I'm done with AMD on the desktop front; a platform which is probably a dead one in the next ten to twenty years anyway. AMD should just stick to the server market and mobile platforms for CPUs as that's where they have a dog in the hunt.
I understand why AMD execs resigned in the past 2 years... can you imagine what it musta looked like then? "Nah, we've actually gotten slower per thread, and will need 4ghz+ to compete now..."
What was the CPU usage like? I have a sinking feeling that CPU usage was low for most of the review. I heard rumors that AMD is working on a patch; it would make sense, because Zambezi loses to the Athlon X4 sometimes, and that doesn't make any sense to me at all. There has to be a performance loss somewhere, whether it's in the CPU itself or a design that Windows has a hard time handling. This processor can't be this slow.
Look back at when the first Phenom hit the street. I think AMD will right the ship, updating over time and fixing any problems. The gaming performance really looks sad, though.
AMD will have to drop BD's prices pretty hard to compete given these benchmarks. These chips are designed for an even smaller niche than gamers: people who use heavily threaded applications all day.
I also don't see why anyone would ever put these procs into a server, with over 100 watts extra of heat running through your system compared to the i5 and i7. Interlagos may be more efficient but the architecture already is very power hungry compared to intel's offering.
Really great way to end the review though Anand, AMD must return to its glory days so Intel doesn't continue to jack consumers. Hell after these benchmarks I could see intel INCREASING their prices instead of decreasing them.
Hmm... it seems that BD leaks a lot of energy when running at high frequency! But I am quite sure it is very good at low 95W usage, with lower frequency. So I think BD is actually a really good, low-energy CPU for server use, but desktop usage is very problematic indeed.
It seems a lot like the Phenom release. A lot of current leakage, and you get either good power and weak performance, or a little better performance and really bad power consumption... The next BD upgrade can remedy a lot of this, but it cannot work miracles...
I am quite sure that BD will become a reasonable CPU with upgrades and tinkering, but is that enough? The 32nm production technology will get better in time, so power usage will improve and they can raise frequencies. The single-thread speed is the main problem... If, by some divine intervention, programmers really learn to use multiple cores and streams, the future is bright... But most probably the golden number of cores will stay at 2-4 for the foreseeable future (not counting some special programs), and that is bad. It would require a lot of re-engineering of BD to make it better in single-stream applications, and that may be too expensive at this moment. There is some real potential in BD, but it would require too much from the software side to harness that power when Intel has such a huge lead in single-core speed... The same reason Intel buried their "multicore" GPU project some time ago...
We can only hope that Fusion and the GPU department keep AMD afloat long enough... Or we will have to face the long dark of an Intel monopoly... That would be the worst-case scenario.
Anand, your compilation benchmark only tests single-threaded performance. Would it be possible to do a multithreaded benchmark? Just do the compilation on Linux with MAKEOPTS=-j9.
Also, most of your benchmarks only test floating point performance. It was obvious to me that Bulldozer would be bad at that and I am not surprised. Is it possible to test parallel integer heavy workloads like a LAMP server? Compilation is another one, but I mentioned that above.
Here's hoping that the reviews to follow will offer at least some perspective on why single-thread performance is still important, instead of just harping on it (as previous reviews did).
Anybody can run a benchmark, but it's the broad context and perspective that I came to appreciate in AnandTech reviews, beyond "I suspect this architecture will do quite well in the server space". Mind you, I'm not referring to the big AMD vs. Intel broad strokes, but the nitty-gritty.
Agreed. An enhanced 8 core Phenom II X8 on 32nm process would have used ~1.2B transistors on ~244mm^2 die (smaller than Deneb & about the size of Gulftown) as opposed to the monstrous ~2B and 315mm^2 of a Bulldozer 8 core. Given the same clock speed, my estimates have it outperforming the i7-2600 on most multi-threaded applications. And, with a few tweaks for more aggressive turbo under single core workloads, it would have at least been somewhat competitive in games.
Bulldozer is a BIG disappointment! It would need at least another 4 cores (2 modules) tacked on to be worthwhile for multi-threaded applications. AMD has stated it is committed to providing as many cores as Intel has threads (Gulftown has 12 threads, so a 12-core Bulldozer?), so maybe this will happen. Still... nothing can help its abysmal single-core performance. If they can do a 12-core Bulldozer for less than $300, I might get one for a work machine but stick with an Intel chip for my gaming rig.
Companies this incompetent should not be allowed to survive. They bought a GPU company 5 years ago, and have done absolutely nothing to create any type of fusion between the cpu and gpu. You still have a huge multi-layer, multi-company software bloat separating the two pieces of hardware. They have done nothing to address this, and it is clear they never will. Which makes the whole concept a failure. It was a total waste of money.
Consider what AMD is and what AMD isn't and where computing is headed and this chip is really beginning to make sense. While these benches seem frustrating to those of us on a desktop today I think a slightly deeper dive shows that there is a whole world of hope here...with these chips, not something later.
I dug into the deal with Cray and Oak Ridge, and Cray is selling ORNL massively powerful computers (think petaflops) using Bulldozer CPUs controlling Nvidia Tesla GPUs which perform the bulk of the processing. The GPUs do vastly more and faster FPU calculations and the CPU is vastly better at dishing out the grunt work and processing the results for use by humans or software or other hardware. This is the future of High Performance Computing, today, but on a government scale. OK, so what? I'm a client user.
Here's what: AMD is actually best at making GPUs...no question. They have been in the GPGPU space as long as Nvidia...except the AMD engineers can collaborate on both CPU and GPU projects simultaneously without a bunch of awkward NDAs and antitrust BS getting in the way. That means that while they obviously can turn humble server chips into supercomputers by harnessing the many cores on a graphics card, how much more than we've seen is possible on our lowly desktops when this rebranded server chip enslaves the Ferraris on the PCI bus next door...the GPUs.
I get it... it makes perfect sense now. Don't waste die real estate on FPUs when the ones next door are hundreds or thousands of times better and faster. This is not the beginning of the end of AMD, but the end of the beginning (to shamelessly quote Churchill). Now all that cryptic talk about a supercomputer in your tablet makes sense... think Llano with a so-so CPU and a big GPU on the same die, with some code tweaks to schedule the GPU as a massive FPU, and the picture starts taking shape.
Now imagine a full blown server chip (BD) harnessing full blown GPUs...Radeon 6XXX or 7XXX and we are talking about performance improvements in the orders of magnitude, not percentage points. Is AMD crazy? I'm thinking crazy like a fox.
Oh..as a disclaimer, while I'm long AMD...I'm just an enthusiast like the rest of you and not a shill...I want both companies to make fast chips that I can use to do Monte Carlos and linear regressions...it just looks like AMD seems to have figured out how to play the hand they're holding for change...here's to the future for us all.
This is a sad day. The Bulldozer design is basically AMD repeating Intel's mistakes. They were able to zip past Intel back then because Intel focused on clockspeed, and they focused on IPC. Now it seems to be the other way around.
"Heavily threaded workloads obviously do well on the FX series parts, here in our 7-zip test the FX-8150 is actually faster than Intel's fastest Sandy Bridge"
We work on a product which is heavily multi-threaded (around 90 threads on one component). Wondering if FX-8150 would be a better bet to lower hardware costs.
very disappointing performance. I'm in 100% agreement with everyone displeased about the single threaded performance and find it laughable that they'd think performance would be gimped because Win7 doesn't use the cores properly.
I'm actually a fan of the A8-Llano series of processors and wanted to see what the hybrids of Bulldozer with an integrated 6700 or 6800 series AMD Graphics would be like. Think about it: it could have been a media center from hell's dream chip. "Good Enough" graphics for gaming, HD acceleration, and plenty of PCI-E slots on a mATX to pop in as many satellite, OTA, and/or digicable tuners to make you sick. And have the main heat generators all cooled by one trick water/copper air cooler. That would have been pretty sweet.
But they're going to have a lot of tweaks to do to make this competitive enough for anyone, myself included, to want to commit to it. I've been an AMD user ever since the K6-II days, when they were at Intel's heels, up through the Socket 939 days. But then I finally went Intel with Wolfdale, and currently have a 2600K. They have a good thing with the purchased ATI offerings... but in order to sell Fusion, the other piece they're "fusing" into the chip needs to be enticing.
I've been reading that the scheduling in both Windows 7 and the Linux kernel is a mess for Bulldozer, and that performance will improve once there's a patch. Any truth to this? It's what I've read over on XS.
Anand, you said it yourself. AMD hasn't been competitive since 2006 and Core 2. Has there been a real choice in the CPU space for the last 5 years? Do we even need choice? The fact of the matter is Intel is only competing with *THEMSELVES* in the CPU space and this has been the case for the last 5 years. They need to keep dropping prices and releasing faster processors because that's the only way they are going to sell more CPUs, irrelevant of whatever drek AMD is releasing in their wake.
I really wish tech writers and pseudo-economists would stop repeating the lie of a meme that "we need competition" in order for innovation to continue, when that's simply not the case.
Your comments are a gross oversimplification. There are many segments of the x86 market and AMD has been competitive in some of them over the past 5 years. Just because they haven't been competitive in your favorite segment (which I am guessing is high end) doesn't mean that they haven't been competitive at all.
I bought an Atom board and a Zacate board and the Zacate was clearly and thoroughly superior. So AMD wins at the very low end.
At the low midrange I believe that AMD has always, except until maybe very recently, been the better choice; of course AMD has achieved this by basically making no money on their parts which obviously is not sustainable.
At the mid-midrange, AMD has been sometimes competitive and sometimes not over the past 5 years. I personally bought a 6-core Phenom II around this time last year because the workload I care about - massively parallel software compiles - was head and shoulders better on AMD than on similarly priced Intel at the time.
Once you get into the high midrange and all segments of the high end, then I finally agree with your comments - AMD has not been competitive there for 5 years.
On the laptop front, AMD has mostly been uncompetitive in the time frame in question except where they have reduced price to an unsustainable (for them) level and stolen the occasional win. The only win for AMD in that space is, once again, their Atom equivalents which win in netbooks.
Your post is an oversimplification. However, if AMD doesn't pull a rabbit out of their hat, then for the next 5 years your oversimplification will end up being true.
Competition driving innovation is not a lie, and your appeal to people to stop repeating this truth will go unheeded so don't waste your breath.
No, it's not really a gross oversimplification; it's an accurate generalization borne out repeatedly over the last 5 years.
Intel has AMD beat in every single price-performance segment in the x86 market except in the rare cases where mGPUs come into play, like your Zacate example. Which is ironic in and of itself, given some of the asinine remarks AMD leadership has made in the past with regard to ULP products.
Just look at this low to mid-range line-up, the pricing segments have been steady for the last 4-5 years during Intel's dominance, the ONLY things that change are the SKU names and clockspeeds.
This all occurs independent of any meaningful competition from AMD, as every single Intel option offers better performance for the same or slightly more money.
Even in multi-threaded apps, Intel often achieves performance parity with FEWER cores and less TDP. It's embarrassing, really, how Intel can compete with AMD's 6-core processors using their *2*-core processors, even in the threaded apps you're referring to.
Tick Tock doesn't stop when AMD stumbles, nor does Intel's commitment to process fabrication leadership. Why? Because their investors and consumers demand innovation INDEPENDENT of what AMD does or does not do to compete. Competition does not drive innovation in the x86 desktop market and has not for the last 5 years, the bottom line does.
AMD is the only viable alternative to Intel. If AMD should not get out of this slump, we could well see AMD leave the CPU market which would leave one CPU manufacturer, and we all know how restrained Intel is with its pricing...
I really want to see some server benchmarks: LAMP & IIS stack, DBMSs, HPC benchmarks.
I do not think the real potential is being explored; for FPU-intensive code especially, a recompile is needed with a compiler that knows what a BD is.
Sometimes I think it's a pity a company like IBM doesn't make a consumer chip, ie. a rival to Intel which has enough cash and expertise to do the job properly, come up with something that can be a good fit for both the consumer & server spaces. IBM certainly has plenty of CPU design experience for servers, eg. the 4.25GHz 8-core Power7.
This is interesting, even though the bulldozer release was today with what is considered bad results, their stock price is up almost 4%? Is it possible the investors know something the customers don't?
The financial press reported that AMD was the second most 'upgraded' stock in the S&P 500 in the past week...with 3 upgrades from analysts...it's contextual though since they are upgrading it from poor ratings like sell, hold, or market underperform.
The bigger reason is that when they reported the GloFo headaches as they are required to do, the investors brutalized the stock and drove it below $5...once it went below $5 a lot of pension funds, etc... have rules against holding 'penny stocks' and then they had to sell driving it even lower, eventually to $4.31 when it bottomed out.
At that point, the stock purely on a P/E basis is cheap even with the headaches and folks like me step in figuring that even if they get a little traction there's a huge upside...for all the bitching, manufacturing problems and glitches, etc... almost always get worked out. Most processes have positive dynamic stability, meaning that things tend to return to a mean steady state, so if it's well below the mean for temporary reasons, it'll go back up and vice versa.
The buyers are just putting the stock closer to where it needs to be based on the ground truth...that said, it's still cheap as hell and could triple just to get to the S&P average P/E. This is one of those times when you'll look back and say, I could've had AMD below $5.
The Linux scheduler is more advanced than the one in Windows, so if Windows can't deal with the CPU, let's see if it works the way it should under the Linux environment. The reviews on the net are way too inconsistent on Windows; Legit Reviews has the Bulldozer CPU neck and neck with the i7-2600, check it out. http://www.legitreviews.com/article/1741/1/
The article states that more clock speed is needed to negate the architectural limits on single-threaded performance. Then, in the very brief overclocking results, you indicate that the processor will easily clock from the default 3.6GHz to 4.6GHz. But there are no benchmarks of how that impacts performance in any of the tests.
I want to know whether I can buy a $250 processor that will OC to 4.6GHz on stock cooling, making it competitive in single-threaded applications while stomping other CPUs at multi-threaded tasks at that speed.
Let's see the OC results and temps compared to the regular benches, please.
"Since transcoding is done by the CPU this is a pure CPU benchmark."
This is true only if you're using non-NVIDIA GPUs. For people using NVIDIA GPUs not more than a couple of years old, a lot of the texture transcoding is offloaded from the CPU and done on the GPU using CUDA, and this gives a pretty good speedup.
They used an ASRock board, however their gaming tests seemed to push towards a GPU limit (using a 6950), rather than a CPU limit. However, one might argue that you would really be pairing a fast card with such a processor and using very high settings.
There's something very fishy going on here as well. Take a look at these two pages:
The 580 and CVF-equipped setup is being trounced by the 6950 and Extreme4 setup. What gives? As most of the reviews I've seen have been based upon on the CVF motherboard, is it entirely possible that *gasp* ASUS made a bad board/BIOS, or is there something rather odd about HH's setup?
..but because Intel doesn't have to try hard to compete, here we were sitting as consumers waiting for a proper response from AMD so that Intel would be on their toes to unleash more potential in their chips or lower prices, but this is sort of sad...
I recently bought a Phenom II X6 1090T to upgrade from an X3 710. Looks like an even better deal now. BD's power draw in particular is disappointing. No doubt Intel is what I'd recommend to others. I've been using AMD CPUs for several years now (an Athlon XP 2100+ was my first), and I still like AMD, but I'm disappointed by BD, especially after the long wait.
It seems like performance on bulldozer is highly application dependent, better at data-parallel and worse (even than Phenom) on irregular and sequential applications.
I'll probably skip this one.
I don't mind this tradeoff, but the problem is that AMD already has a good data-parallel architecture (their GPU). In my opinion they are moving their CPU in the wrong direction. No one wants an x86 throughput processor. They shouldn't be moving the CPU towards the GPU architecture.
AMD: Don't pay the OOO/complex decoder penalty for all of your cores. If your app is running on multiple cores, it is obviously parallel. Don't add hardware that wastes power trying to rediscover it. Then, throw all your power budget at a single or a few x86 cores. Beat Intel at single threaded perf and then use your GPU for gaming, and throughput processing (video, encryption, etc).
I'm not a fan of Intel, but they got this right. If they get over their x86 obsession and get their data-parallel act together they are going to completely dominate the desktop and high end.
Care to share which tests are 64-bit? Each benchmark program used should specify whether it's 32- or 64-bit. Why do all review sites forget to include this critical info? From the limited results I can find on the net, AMD sees a large performance increase running 64-bit code over 32-bit code, while Intel sees little if any increase.
I've got an Asus board that promises to support BD, and I've holding off upgrading my unlocked/overclocked 550BE for literally months, and for this? I might as well just get a Phenom II quad or 6-core.
I've said all along that AMD needs to address their clock versus instruction efficiency to be competitive. To do that they need to redesign their cores and stop dragging along their old K8 cores.
So here we are with Bulldozer, Wider front end, TurboCore now works, floating point decoupled, 8 (int) cores, and... still flogging the same instruction efficiency as the old K8 cores (at least, the integer portion of them).
Oh, yeah, I'm sure at the right price point some server farms will be happy with them, and priced low enough, they can hold on to the value portion of the marketplace. To do both they'll have to compete aggressively on price, and be prepared to lose money, both of which they seem to be good at.
Like Anand said, we need to see someone actually compete with Intel, but it appears that AMD has lost the ability to invent new processor cores, it can only manipulate existing designs. Instead of upgrading the CPU, it looks like I'll go for a full Intel upgrade, unless I can find an 1100T real cheap. Hmm, that's probably a real possibility. I'm sure a lot of AMD fans are going to be trading them in now that they see what their AM3 upgrade path is(n't).
I think that you should clarify the difference between what you call "server" workloads (i.e. OLTP/virtualization vs. HPC).
I suspect that with one FPU shared between two integer cores, HPC performance is going to suffer.
The somewhat computationally intensive benchmarks that you've shown today already give an early indication that the Bulldozer-based Opterons are going to suffer greatly in HPC benchmarks.
On that note: I would like to see you guys run the standard NCAC LS-DYNA benchmarks or the Fluent benchmarks if you can get them. They'll give me a really good idea as to how server processors perform in HPC applications (besides the ones that you guys have been running already). Johan has my email, so feel free to ask if you've got questions about them.
Bulldozer reminds me of Sun's (Oracle's) Niagara architecture. It seems AMD aimed at the server and professional market, which makes business sense. The profit margins there are net 50-60% (this is AFTER marketing, support, and other overhead costs), and along with high-performance workstations it's the only growing market as of now. Hence, the stock market lifted AMD's stock. The gaming and enthusiast market is around 0.7% of CPU revenue - yep, that's right, I work with this kind of statistical data, guys.
This is a promising architecture (despite the fact that it's not good for home enthusiasts). AMD should focus on providing more I/O lanes through the CPU - i.e., PCIe lanes on cheaper boards without requiring additional chips. That would allow placing more GPUs on overall cheaper infrastructure - exactly the way the HPC and server markets are evolving. Then they should really get a good software team and make/support/promote an SDK for general GPU computing, in line with what NVIDIA did with CUDA.
For anything mainstream / aka Best Buy, Walmart, etc./ Llano is good enough.
As I said, this Bulldozer chip apparently isn't good for enthusiasts, and Anandtech is an enthusiast site, but unfortunately that's just a very small niche market. People shouldn't bash a product just because it doesn't fit their particular needs. It's OK for the vast majority of the market.
Thanks for the VERY interesting stats. I had a hunch it made good sense, but since I don't work with these data it was just a hunch in the end. Now it's better...maybe a hunch plus. We should feel lucky that they even pay any attention to this segment...I suspect they do b/c a lot of decision influencers are also computer racers at home.
One thing that I will also say/add is that while people are perhaps grossly disappointed with the results, think about it this way:
What you've really got is a quad-core processor (I count cores by FPUs, not by the integer clusters) doing a 6-core job.
So if they went to a 6-module chip, the benefits can actually be substantial.
And on the 8-module server processor, it can be bigger even still.
And yes, this is very much like the UltraSPARC T-series (which, originally was designed by UltraDense as a network switching chip), but even they eventually added a FPU per core, rather than just one FPU per chip.
The downside to the 8-module chip is a) it's going to be massive, and b) it won't be at the clock speeds it NEEDs to be to compete.
I quickly ran the Rage vt_benchmark and got ~0.64 at 1 thread and ~0.25 for 2-6 threads, which lines up with your Intel numbers - BUT I'm running a Q6600, 4GB, and a GTS 250... shouldn't I see much worse scores compared to an i7 and a current-gen video card? Is this something to do with Rage's *awesome* textures, or?
AMD, please up your game. Stop being just the lower price point and become the leader. We do not need 10 to 20 processors to choose from. Let's have fewer products and better performance, even if the price is the same as or more than Intel's. I like that you stick with a socket longer than Intel does, but you keep getting beaten every time. AMD needs new people who can drive them further ahead.
New architectures always require software optimization to shine. Will enough Bulldozers sell to convince major software vendors to do that work? Could we get some AMD optimized benchmarks? In any case, I prefer the APU strategy, given my compute needs.
I invested in AMD a while back and, like many on here, thought that BD was going to restore AMD's prominence as the enthusiast's definitive choice once again. Thankfully, especially now, I pulled out my money while it was still safe, decided not to wait for my next upgrade rig, and (thankfully, in hindsight) went Z68, i5 2500K, and GTX 580.
Maybe my hopes for AMD on the CPU side were a pipe dream. It just seems that Intel is so far ahead of them now, with their rather aggressive development, that AMD is destined to be the bargain-basement alternative forever. Regardless of my lack of realism or false hopes, it's still a downer.
At a certain point in the not too distant future, the x86 market will be stagnant enough in terms of sales growth (with mobile devices taking larger and larger shares of the computing market) that it will not pay to spend the exponentially increasing R & D dollars to advance x86 state of the art. At that point, AMD will have an opportunity to catch up to Intel - if AMD survives in the x86 market long enough.
Always remember: no one, that's *no one*, not even Intel, runs the x86 instruction set directly in hardware. No one. All these x86 CPUs emulate the instruction set through a RISC core. It's only a matter of time until nobody bothers with the emulation.
AMD's CPU division had to have known what the performance of this architecture would be for years now... so I imagine this might not be just a "let's just get something out there" move. Maybe there are some alternative ways of looking at this:
1. AMD has made it clear that this is the beginning of what they really mean by APU, and there will be more architectural movement towards that end in the future. Could this be a transition CPU to a possibly very different x86 future?
Could it be that this chip has a better set up than phenom for an interchangeable modular structure?
2. With desktop PCs in decline, could this be a move to capture more server space, which is still a growing market?
3. This obviously also begs the question: is there some benefit to this design with a GPU attached?
4. Is there some benefit for mobile computing?(starting to stretch here...)
I don't know. I don't want to think this is just an epic fail, but... This is kind of sad, since I was betting they'd go 3/3, but on the really big one they shot out a dud.
Let the wait for Trinity begin. Hopefully the changes from Bulldozer->Piledriver help more than is expected.
If the FM1 platform weren't dead I would think about getting a Llano desktop for my nephew, but I think he'll just have to wait. Maybe once FM2 motherboards start coming out, Llano can drop in, with a possible change to Trinity later.
Yeah, I think I meant #3 to be part of #4. I can't help but think that if Bulldozer performs poorly in a Trinity package, that's very bad news for AMD, even if it performs well in servers.
I am left wondering how much of Bulldozer as an architecture is an evolutionary step to the future they've described. I think that's much more important than the performance we see today. Perhaps in that respect it could be a Fermi, and not so much a Pentium 4.
Their own last generation 6 core Thuban beats this in half the tasks. If you have an application that effectively uses 8 threads, this might be a worthy upgrade, but anything lightly threaded is pretty bad (looking at the real world benchmarks, Techreport, Anandtech, Tomshardware, etc). I had high hopes for this.
They did say though that Windows 8 would make better use of modules (vs cores) and will know what to do with them better, and to expect another 10% increase from that. But we're still at a point where an AMD core doesn't even beat out a Nehalem core, let alone a Sandy Bridge or Ivy Bridge.
*Le Sigh* This Bulldozer is more of a Mudshovel. Their goal of 15% single threaded performance increase per core per year won't have them catching up to Intel anytime soon either.
Well, the optimist in me says the L3-cache-less, higher-clocked quad-core mainstream parts will be more competitive. And cheap too; the 6-core FX is only $169.
Was pretty hyped for this chip to come out, and if it had been a decent performance boost, I was ready to upgrade from my Phenom II 1090T. It looks like it'd barely be an upgrade in many areas and a definite downgrade in others. With a whole new architecture and all the hype, I expected a lot more, at least better performance than the current generation of AMD chips. Very disappointed.
It's a CPU review, so settings were selected to show both CPU-limited (or at least more CPU limited) and GPU-limited (or more GPU limited than CPU). And of course, that's only one facet of the overall review.
I read this extensive review. However, after the last page, your mention of a "Windows scheduler problem" made me think these tests might be biased, so I thought I'd post this comment.
Windows is compiled using the Intel compiler, which optimizes code very well for Intel processors but doesn't do so for AMD, whereas Linux is mostly compiled using GNU GCC. So Linux benchmarks would be more neutral for both Intel and AMD processors.
Also, nowadays a lot of desktop users have started using Linux distros like Ubuntu, and on servers Linux is mainstream.
Server users would benefit greatly from Linux CPU benchmarks.
So, I would request that you include Linux benchmarks for processors in your reviews.
Server benchmarks will be coming from Johan at some point, though obviously those require a lot of time to put together. As for the compiler of Windows, you're never going to change that, and Windows is still 90%+ of the market. Ultimately, as a hardware manufacturer you need to make hardware that runs the stuff people are doing faster than the competition or there's not much point. It's like having the world's fastest GPU with crappy drivers: no one will like it because it can't run games.
Wait wait wait wait wait, wait some more, wait for years... then.... bleh. Doesn't even match up to the last product. That is not the way to move the bar forward or stay in business, AMD.
At some point why didn't someone just say.... you know what? This thing may sell like hotcakes for servers but just doesn't make sense for desktop. Sell the Opterons to get that server profit margin, and just die shrink the Thuban and make it faster for desktop. Know when something is a lost cause and fold the hand already.
Any chance of a few benchmarks at the overclocked settings? It would be quite interesting to see, as 4.6GHz seems to be a common OC for an i7 2600K. (Obviously this would be slower, but it'd be nice to see.)
Even the most hardcore AMD fanboys are having a hard time defending this bulldozed Bulldozer chip. I have seen people blaming everything from GlobalFoundries to saying "it's not all that bad." If you're an AMD fanboy in denial, you should read what's wrong with Bulldozer:
1) AMD's marketing is extremely misleading, because Bulldozer is no more an 8-core chip than Intel's Core i7 is, for example. What AMD did was create a crude implementation of CMT (chip-level multithreading), and it's debatable as of now whether it's any better than Intel's Hyper-Threading, keeping in mind that Hyper-Threading takes less than 5% of die space per core. Basically, each module (as AMD likes to call its cores now) has an additional integer unit, but the problem is that the additional integer unit shares the fetch and decode units with the other integer unit in the module. Ergo, the CPU is not a true 8-core. It is just a cheap AMD marketing gimmick.
2) The reason for the horrendous single-threaded performance is the poor design of the integer units in the modules (you can read this as very low IPC). Couple that with the fact that two integer units share the same fetch and decode units, so when an additional thread is added and the pipeline needs to be flushed, it messes things up for the other integer unit as well. At a bare minimum, AMD should have added a fetch and decode unit for each integer unit. But wait, if they had done that, then they would have needed to add a dedicated FPU for each separate integer unit. Hell, this is almost worse than Hyper-Threading. At least Hyper-Threading makes use of unused resources in the CPU; AMD's implementation can leave resources unused. Bottom line: no matter how you slice and dice it, the Bulldozer chip cannot be called an 8-core chip. It's a 4-core CPU with CMT.
3) To compensate for the design flaws in each module (what I described in point 2), AMD increased the L2 cache to almost absurd levels per module, to mask some of the latency created by their poor CMT implementation. Intel's Sandy Bridge gets by with 256KB of L2 cache per core. Hence we have ~1.6 billion transistors in Bulldozer, which creates leakage, which in turn translates into high power draw. Those of you who defended this failure of a CPU by claiming it was designed for servers should know that performance per watt is all that matters for servers, because you run into cooling issues otherwise, which translate into high energy costs both for the servers themselves and for the cooling, which is done mostly by air conditioning. And we all know that air conditioning draws lots of electricity. Bottom line: it's not GlobalFoundries' fault; it's AMD's fault for not designing a more efficient CPU with fewer transistors (a Sandy Bridge quad-core has ~1.12 billion).
After years of being an avid AMD fan, I finally switched this year to Intel's Sandy Bridge CPU. AMD could no longer compete with the pure power of the Sandy Bridge core. I was really hoping Bulldozer would rewrite the books and put AMD back on an even footing. In some ways they have rewritten the books, as the architecture of the new CPU is quite a change; however, for now at least, it doesn't pose any threat to Intel's superb Sandy Bridge core. Not only are Sandy Bridge CPUs fast, they are also fairly low power and run very cool. What more can you say, really? Fast, cool, and efficient. Perhaps as AMD develops it further, improvement may come, but don't expect Intel to sit still either.
I was hoping for Bulldozer to address AMD's weakness rather than continue to just focus on its strengths
UNQUOTE
And while the new chip does feature a slight improvement in multi-threaded apps, its pound-for-pound performance is slower than the AMD Phenom II X6 1100T.
But it's not a big deal. There are greater problems in this world than CPU debates. At least we have both AMD and Intel. Imagine if there were only one company making processors - my god, that would be a problem.
The Bulldozer would be a dynamite server processor provided the OS was optimized for the cores that are available.
I think the processor has real potential but needs further optimization in a later stepping.
As for PC usage, the performance of the FX-8150 is not bad, but I could do better with an i5-2500. I am a low-power person, and if I were doing a PC upgrade tonight at Newegg, I would choose the i5 2400S and a Z68 Intel motherboard.
I don't know about the rest of you, but personally, if I'm going with an 8-core CPU, I'd go water cooling anyhow. Air-cooled PCs should be on their deathbed and water cooling should be the norm. AMD and Intel are leaving more CPUs unlocked for overclocking anyhow. What do you think the performance would look like at 5GHz per core compared to stock speeds?
Personally, I wonder if soon they'll just require water cooling and quit worrying so much about TDP (within reason, of course); then they can concentrate on clock speed.
Belard, water cooling is what many of the higher-end gaming systems already run. What makes it stupid? It's far more effective than air cooling; it just requires more equipment and is a little more costly. You say a 25% overclock won't make up the performance difference, but what about possibly going up to 6GHz per core with water cooling? Do you really think that wouldn't put up some pretty good numbers? Hey silverblue, I'm not sure.
This is a rather naïve sounding post, but it just occurred to me and I figured I might share.
Now, I'm putting a lot of faith in simple marketing gimmicks here, but bear with me, and you might find this excellent food for thought.
When I first discovered the leak concerning AMD's new Bulldozer consumer CPUs, I was kind of put off by AMD's naming scheme. "FX 8150" seems like such a small number and obviously wouldn't appear appealing to the eyes of un-savvy consumers. Now, one might find this claim a bit irrelevant, but if you look at history, numbers sell. Even AMD confirmed this when it launched its new series naming jargon, the "A4s, A6s, and A8s." This is quite obviously a marketing illusion to make AMD processors appear better than Intel's Core i3s, i5s, and i7s in the area most unaware consumers see first: the name!
That said, I started thinking about processor branding. In the past, AMD has used a really strict branding system for its last two CPU designs. A Phenom II's part name very consistently correlated with the CPU's clock speed and, therefore, performance. Slap an extra +5 on the name and you got an extra 0.1GHz of CPU frequency. Also, the higher-end CPUs were always placed in the higher spectrum of the thousands place. The top-of-the-line quad cores populated the numbers 925-1000, while the hexa-cores resided in the 1000-1100s. The rebranded CPUs based on original Phenom and Athlon architectures were given much lower values, with the very popular 555 BE being a prime example. With Llano, the top-end A8-3850 reiterates this phenomenon. The further the part name extends from the number "4000," the less performance you received from the CPU and GPU, relatively incrementally. So, as you can see, AMD consistently used this strategy to give value to its parts without listing a single specification. Larger numbers generally mean more performance, and to the casual onlooker, unfamiliarity with the performance you actually received from the processors in comparison to Intel's made that sub-$200 price point look really tasty.
So, I say all that to present the following theory. Given that these processors can reach 4.6GHz on air, and the unicorn-like 5.0GHz (presumably) on AMD's water-cooling solution, there seems to be a lot of headroom for AMD to pull off the most unprecedented comeback in the history of computing. That's right, I'm saying that maybe AMD intends to release new Bulldozer variants with upped clock speeds and an actual included water-cooling solution at a raised price point.
Could we see a future Bulldozer AMD A8 8950 @ 4.5GHz with water cooling bundled for $350 once GlobalFoundries gets its act together producing reliable chips? Think about it... AMD's CPU frequency stepping and naming is nowhere to be found with these CPUs; they are all huddled down around the number 8000. If this is actually the very bottom of the spectrum, it would mean that the very low-end Bulldozer variants were on par with the best of Phenom II. Subsequently, the higher-end Bulldozers I propose would have nothing to lose and everything to gain from higher clock speeds. All they can do is go up! With higher clock speeds, Bulldozer could make up for all its woes seen here today in single- and double-threaded applications, which comprise nearly 50% of consumer-level apps. There's potential here, but I will admit to those of you who find this whole concept absurd: I have my doubts. Can AMD do it? They'd have my eternal respect, and wallet, if they do.
Sooner or later.... someone (perhaps Anandtech) will benchmark a 5Ghz AMD FX 8000 series CPU.
If said 5.0GHz CPU (water cooled) is still SLOWWWER in any way compared to a $200 Intel 2400 (3.1GHz) or the $210 2500, who would care to buy such a $300-350 chip?
Okay... I overclock the 2500K to 4GHz and it kills the 8150 at 5-6GHz... Nobody buys the 8150 or higher. It just doesn't matter... it's too slow.
They support FMA instructions but then don't fuse multiply and add micro-ops to *make* FMA instructions (as far as I can tell from the article). That's stupid.
The way they've done it, everyone has to get a new compiler to take advantage of their chips. If they created FMAs in the muop-fusion stage, then even older software would get a boost too.
You absolutely cannot fuse mul+add into FMA on the fly, because the results will be different. Now, you can argue the result is "better" because it omits the rounding after the multiplication, but the fact is you need to adhere 100% to the standard, which dictates that you do that rounding. Software might (quite likely some really does) rely on it doing exactly the right thing. For the same reason, compilers can't do such optimizations either, unless you specifically tell them it's OK and you don't care about such things. (It's not only standard adherence but also reproducibility - such compiler switches also allow them to reorder things like (a+b)+c as a+(b+c), which likewise isn't allowed otherwise for floating point, because you can get different results, which makes things unpredictable.) GCC, for instance, has a -ffast-math switch which I guess might be able to do such fusing - I don't know if it will, but I do know you can get surprising bugs with that option if you don't think about it...
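To see why reordering (a+b)+c as a+(b+c) isn't allowed for floating point, here's a minimal Python sketch (the values are chosen so the small term gets rounded away in one grouping but not the other):

```python
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # a + b is exactly 0.0, then + 1.0 gives 1.0
right = a + (b + c)   # b + c rounds back to -1e16 (1.0 is below half an ulp), so the sum is 0.0

print(left, right)    # 1.0 0.0 - same inputs, different grouping, different result
```

Same three numbers, two different answers, which is exactly why a compiler may only reassociate when you explicitly opt in with something like -ffast-math.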
Thanks for explaining that. I'd kind of assumed FMA would just round as if the MUL happened first. By defining it "correctly," they've thrown away a lot of compatibility for a really marginal increase in accuracy!
Nope, it's not marginal. With your "fused madd" you'd basically get code which, on Bulldozer, gives slightly different results than on any other CPU... silently. This is called a bug. It is simply not acceptable for a CPU to perform "optimizations" which alter, even slightly, the expected numerical output, because then the programs that run on it would fail in very subtle and hard-to-track ways.
That isn't quite correct. There is real demand for fused multiply-add; not rounding after the mul is something that is quite appreciated. You just can't use FMA blindly as a mul+add replacement, but it's perfectly defined in standard floating point nowadays (IEEE 754-2008). Besides, it would be VERY difficult for the CPU to fuse mul+adds correctly even if it did intermediate rounding after the mul. First, the CPU would need to recognize the mul+add sequence - doable if they immediately follow each other, I guess, though it requires analysis of the operands. And unless both instructions write to the same register, it couldn't do it anyway, since it cannot know whether the mul result will be used later by some other instruction. This is really the sort of optimization compilers do, not CPUs. Yes, CPUs do op fusion these days, but it's quite limited.
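The single- vs. double-rounding difference being discussed can be demonstrated without any FMA hardware by computing the product exactly with Python's `Fraction` (a sketch; `x` is picked so the product's low bits fall just below double precision):

```python
from fractions import Fraction

x = 1.0 + 2.0**-27           # exactly representable as a double

# Two-step mul then add: x*x is rounded to a double first, losing the 2**-54 term.
two_step = (x * x) - 1.0     # equals 2**-26

# Fused-style: exact product, one rounding at the very end (what FMA does).
fused = float(Fraction(x) * Fraction(x) - 1)   # equals 2**-26 + 2**-54

print(two_step == fused)     # False: the two results silently differ
```

This is exactly why a CPU can't quietly turn a mul+add pair into an FMA: the fused result keeps a low-order bit that the standard mul+add sequence is required to round away.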
Not trying to argue with you about the accuracy issue - if "FMA" is defined in a certain way, that's how it's defined and an instruction that rounds differently is a different instruction.
However, imagine AMD could implement a "Legacy FMA" (LFMA) instruction in their FPU - which would round as if the MUL came first. You could then fuse MUL, ADD pairs into LFMA instructions without producing bugs. Not sure whether the two types of FMA could be done on the same hardware (they are basically different rounding modes) without a large overhead though.
I don't really understand why there's a big demand for not rounding after the MUL, because normally these instructions show up in code like for (n = 999; n >= 0; n--) total += a[n]*b[n]; ...and the potential rounding inaccuracy comes in the add stage: there are often lots of adds in sequence, but not normally lots of MULs, and adds suffer more often from the problem of accumulating many small values. Anyway, I know my code has lots of instances of the "multiply-add" operation, and it would be nice to have some sort of CPU acceleration for it.
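The point about error accumulating in the add stage is easy to see in Python: summing 0.1 ten times with plain adds drifts off, while math.fsum (compensated summation, effectively a single correctly rounded result) does not. A minimal sketch:

```python
import math

vals = [0.1] * 10

naive = 0.0
for v in vals:
    naive += v               # each add rounds, and the tiny errors accumulate

exact = math.fsum(vals)      # compensated summation: one correctly rounded sum

print(naive)                 # 0.9999999999999999
print(exact)                 # 1.0
```

Ten adds are enough to lose the last bit; a long dot-product loop does thousands of them, which is why the accumulation, not the multiply, usually dominates the error.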
The party is over! AMD is no more! They have successfully designed themselves out of the desktop CPU business. I applied for their CEO position and they went with a moron! You can all kiss low desktop CPU prices goodbye! Congratulations Intel, you now have a CPU monopoly!! We can only hope that Nvidia will come through with a super-fast Tegra that will outperform Intel in the netbook arena.
I don't think AMD can pull a rabbit out of its hat now. Guess I'll cave in and buy an i7. I feel like I'm going to puke...
I am a newbie and have quite a lot of questions after reading the Bulldozer review.
1. What are heavily threaded tasks? Does MATLAB count as one? I heard MATLAB uses a lot of FP resources; if so, how can Bulldozer beat the i5 with only 4 lower-efficiency FPUs? Does browsing a lot of websites simultaneously count?
2. If single-core CPUs A and B have the same frequency but different efficiency and work on the same task without being fully loaded, will they accomplish the task in the same time?
3. So if single-core/thread performance is very important, doesn't the situation I mentioned (if it's true) completely fail to show the benefit of a high-efficiency core? (Not considering power consumption.)
4. Will many applications fully load one core without splitting the task across other cores? What kinds of applications behave this way? Any examples?
5. In the case of question 2, if another application requests CPU resources, will the higher-efficiency core respond more quickly?
6. In the case of question 2, consider multi-core CPUs A and B. If one core of each CPU is nearly fully loaded and, at the same time, another application requests CPU resources, and the operating system decides to run it on another idle core, will the higher-efficiency core respond faster?
Don't bother....I have been following this saga since AMD's first CPU and, until today, was an AMD Fan boy. Buy an i7 now before Intel triples the price for this CPU!
I am really surprised that AMD didn't at least match the i7 in a majority of the benchmarks, and what is even more disturbing is that it sometimes performs worse than a Phenom II X4 on some benchmarks. AMD could have tweaked this processor in all the time it took, and it had a stationary target in the i7, so I am perplexed as to why they were not able to get it to benchmark at least as well as an entry-level i7. Hopefully it will be like the first generation of the Phenom X4, where AMD was able to add L3 cache and tweak the overclocking abilities so they could release a product that was at least competitive with the Core 2 processors. I like AMD, but they have to be competitive with Intel and cannot afford, in some cases, a 50% deficit compared to the i7.
There isn't much AMD can do with this. They are "planning" a TICK TICK TICK of 10-15% performance increases with better yields, higher clocks, and tweaks, which is what they and Intel normally do.
True, we CANNOT expect AMD to compete directly with Intel. They simply don't have the resources: not the money, not the talent, not the manufacturing capacity. Perhaps, if they hadn't been HELD DOWN by Intel during the AMD32~64 days, they'd have made the much-needed profits to afford a much bigger R&D department. There was a point at the end of the AMD64/X2 dominance when AMD couldn't make enough CPUs.
If the 8150 were marketed as what it is... a quad-core CPU... and were, across the board, no more than 15% slower than the 2600 at a price of $220 (the 2600 sells at $300), then it would be considered a GOOD buy. But it's worse than that in both performance and price.
It takes years to develop a new CHIP. It would take 1-2 years to fix the problems with Bulldog, if they can be fixed. But look at it this way: how did Intel fix their P4/NetBurst problem? Oh yeah, they developed a whole new design!!
BD is as flawed as the P4. It's very difficult to FIX a HUGE CPU... and SB is about half as complex and half the size of BD! So what... AMD is going to add even more junk to the design?
Hence, it costs AMD about twice as much to make such a CPU as it costs Intel. So do the math: Intel makes more profit per CPU. For AMD to compete, they would need to reduce their price by 25-30% - which means almost NO profit.
AMD is screwed. They'll really need to work with Llano a lot more... and look at burying Bulldozer with something else.
If Piledriver does somehow kick butt (there are no indications that it will) - too bad; a large chunk of AMD users will have already moved on to Intel. And when Piledriver finally hits the market, Intel will have already released an even faster CPU.
While I won't call myself an AMD fanboy (I own both Intel and AMD systems), I've been drooling for a Sandy Bridge-like AMD part. I've been buying and selling AMD systems for desktop users for years. In general, I prefer AMD chipsets over Intel's, I like your GPUs, etc. With the release of the BD (Bulldozer) FX chips... the WAIT IS OVER!!!
My next system will be an Intel... my next customer builds will be Intels with 2300-2600K CPUs.
I *CANNOT* sell my customers a sub-standard part, which is what BD is. Why the hell would I have them spend $250 for a CPU that can't consistently compete with $150 or two-year-old CPUs?
I think we know why Rick Bergman left AMD; I don't see him signing off on such a crappy CPU. Seriously, why bother? Llano's OLDER Fusion design is more attractive than this insulting FX garbage.
What AMD has done with the release of these BD/FX chips is create more sales for Intel, nothing more. Only a fool would buy an FX 8150... just like the fools who spent $1000 on the Intel EE CPUs (OK, not quite that dumb, since these AMDs are a quarter of those prices). These "8-core" CPUs are actually 4-core, the 6-core is really 3, and the 4-core is a dual-core: an enhanced version of the Hyper-Threading Intel introduced 10 years ago.
There is a SEVERE problem when your "8-core" CPU can't surpass Intel's $150 dual-core CPUs. Why, AMD, why did you take a page out of Intel's and Nvidia's books and do the SAME stupid thing? This *IS* your NetBurst and FERMI all wrapped in one: a BIG, HOT, EXPENSIVE and SLooooow product that doesn't impress anyone, other than by the stupidity of its design. You think WE should wait 2-3 years for you to ramp up to 5-6GHz so you can say you're competitive with TODAY'S Intel CPUs? I don't think so.
After an hour or so of reading this review, here is what happened: 5 sales for desktop builds have just gone to Intel i-whatever-it-is 2500s & 2600s. You make me and others LOOK LIKE FOOLS for waiting for Bulldozer or Bulldog to come out and kick some Intel butt. You didn't. No, we were NOT expecting you to surpass Sandy Bridge (SB)... but if your "$250 8150 best CPU" were at least up against the 2500-2600 in performance, it would be acceptable. But on Newegg this $280 CPU is slower in most benchmarks than the i5-2400, which is $180. The 8150's power usage and heat are through the roof compared to the faster i5-2400, which is $100 cheaper and faster in games and most productivity work. No gamer in their right mind would spend nearly $300 for a CPU that is about 25-40% slower than similarly priced or cheaper Intels. Big deal if they're unlocked... so are the K chips, which would only pull further ahead. (We could use a review showing a 5GHz 8150 vs a 5GHz 2600K - but I would expect the AMD deficit to remain.)
The heatsinks on SB CPUs are tiny compared to AMD... that means less noise, less heat.
If a client needs a custom budget computer, I'd go with a $100-130 AMD CPU... that is it. If AMD wants to compete with the CURRENT Sandy Bridge, the 8150 will need to be a sub-$200 part (hey, isn't Intel about to drop their prices?). Their BS "4-core" will need to be $100... but we'll need to see how it performs in the real world to see if it's worth that much money.
This article has over 250 posts in less than 24 hours... and it's the voices of very unhappy AMD users.
I still can't believe AMD went the P4 route. They spent years trying to CHEAT performance, and this is the result? Luckily there's lots of demand for cheap CPUs and ATI GPUs, which should keep AMD alive.
The thing that bothers me most is the Dirt 3 performance.
According to an AMD rep at the AMD launch press-conference, games like Dirt 3 would be able to utilize the Bulldozer's "8 cores" to deliver awesome performance. The truth is that it does worse than the 1100T (the one I already own).
I think the majority of these comments show just how fickle consumerism is in America. Anyway, tomorrow's vision vs. current real-world performance is the rat's nest.
They obviously pushed this towards server markets. Maybe that's why there wasn't much fanfare with the marketing gurus?
The performance obviously doesn't reach out to the niche market of computer gamers. Let's see how lucrative this becomes if AMD is able to crack the not so trendy server market. Those guys don't like to break old habits. Stability is kind of a big deal.
I can also see how this design creates a plug-and-play product for many different markets. The downside is that it's one design for all, which has already proven inefficient for gamers. But what about consumer electronics? They generally want cheap and simple. Performance be damned.
I'd hate to break it to you, but even though Bulldozer was targeted towards the server market, it is a complete non-starter in that segment. Look at the power consumption of the Bulldozer. It's off the charts, and it has less raw performance than Intel chips. I can't imagine any system administrator dumb enough to install Bulldozer chips into any sort of compute or server farm. Why would a farm waste money powering and cooling Bulldozer chips when it would be so much cheaper and higher performance to just use Intel CPUs?
Could just be the ASUS board causing the issues. At any rate, once you overclock past a certain point, power usage just accelerates madly, and you're not going to see these sorts of high frequencies on a server anyway, so the point is rather moot. Additionally, servers are a little more focused on power efficiency than client machines. Magny-Cours was a 12-core CPU and the 6176 had a TDP of 105W if I'm correct, so despite its 2.3GHz clock speed, that's not too bad, all things considered.
I also have waited a long time to finally see if I could replace my Phenom II X4 and X6 with the new super Bulldozer. Nevertheless, I'm pretty disappointed by the raw performance of this new chip. I began using AMD products about a year and a half ago, so I'm not really an AMD fanboy... however, I began to like them for choosing not to follow Intel's shitty habit of switching through 3 sockets every 2 years. Got an Athlon X2, an X4, a Phenom II X6 and two E-350s from them, and I'm very happy with what you get (bang for your money).
However... looking at the Athlons (and even more the Phenoms), power consumption is rather disappointing compared to the Sandy Bridge CPUs I recently bought (yeah, I did not want to leave a 24/7 machine on drawing 60 watts at idle when SB idles at much less with its power gating tech). Bulldozer's power gates are rather disappointing too. I hoped for much higher frequency or lower power consumption at load due to the transistor shrink. And 2B transistors? Seeing as some components are shared across cores, I think this is WAY too much!
BUT one thing deserves to be said. In my case (but that's just me, eh) I wanted a multicore processor which supported both AES-NI and ECC memory. For ECC you can either take an ASUS AM3(+) motherboard (about $120 for the cheapest AM3+ ones), or a 1366 or 1155 board with a C202/4/6 chipset, which costs at least $260! For AES-NI the only alternative until now seemed to be a Xeon, which costs quite a bit more; furthermore, Xeon CPUs are not so easy to get your hands on. I think no one gave AMD credit for their effort to implement AES-NI. If you want a home server, that's a very appreciated bonus.

Although I can understand why many of the users here are angry at the new chip because it doesn't perform very well in gaming, AMD's choice to disregard single-threaded apps is in itself quite good. In a market where in a few years we'll see 40+ cores on a single desktop CPU (well, in servers that'll be next year with Komodo!), what the heck can you obtain with a single core? Idling 39 cores to speed one core up to 8GHz (assuming that's even electrically possible)? Silicon dictates the limits on the frequency you can use in your chip. AMD understood this long ago, when Intel and their S775 tried to surpass the 3GHz wall with the Pentium 4. Since we're going into a multithreaded world, 40 x 4GHz (something like that at 22 or 10nm, I think) will perform WAY better than 1 x 8GHz (provided that apps are well developed). Why doesn't Intel understand this (S2011 provides only 6 cores!)?

However, with 2B transistors a few more cores could've been added :D AMD has to work better on power consumption. Instead of clock-for-clock performance they focus on the number of cores, which is where the future is! You will say that Intel's performance per core is almost double, but AMD has twice as many cores, so... How long will Intel continue to release quads when AMD gets to sixteen? If you're an Intel fanboy, just say that AMD's BD is shit, but I think this kind of strategy is going to pay off in the long term.
In fact, AMD manages to put more cores in a single die, whereas Intel is always 2 cores behind. That alone shows that AMD engineers aren't idiots!
I guess what I was trying to say is that as fabrication processes shrink to 22nm, etc., how much overhead is reduced by plugging in a few more cores vs. a new design and architecture?
I'm wondering if AMD just set the foundation for something big?
Seems like they bet on software making some leaps and bounds.
OK, in several reviews now I've heard about the lacklustre single-threaded performance. Just how bad is it? If you had to compare it to another CPU out there, which Intel and which AMD CPU would it compare to?
The AMD FX-8150 is about £30 or $45 MORE expensive than the i5 2500K, but £50 or $75 cheaper than the i7 2600K.
As Anand said, BD can just about hang on to the i5's coattails (and he is being generous). If the i5 is noticeably cheaper, what exactly is the point of BD?
I can understand being disappointed in the performance of BD, but when a high-end GPU requires 600W, what's another 30W for a CPU? Lower is nice, but how many of us who game and have a nice CPU/GPU combo actually count the watts? Heck, when I got my first i7 920 I got the GTX 285 thinking I would later run SLI, so I have a big PSU. Now I have an i7 970 and the same GPU and can still upgrade to whatever card I want. I tend to think multithreading is still growing, and we will see more apps use more cores, and Windows 8 might utilize an FX core more efficiently. But calling BD a failure is rough; AMD never said it would trounce anything. We were promised 8 cores and we sort of got them. It is an incremental step in the right direction, and I think the future improvements will bear out in favor of this CPU. Just like Llano is doing so well in the laptop market, this could do very well in the desktop market.
We must ALSO remember that Windows 7 does not know about the special Bulldozer architecture, and perhaps that has a role too. Once threads are optimally allocated, perhaps performance will be a little better.
I use Linux (openSUSE 11.4) for everyday work and would love to see if there is any difference. Linux (and other open source software), being open source, is far more versatile, so it is in a better position to take advantage of the latest CPU features offered by AMD.
I use VirtualBox to run Windows (in openSUSE) [can't use Xen/KVM due to no VT-x/d on my Intel CPU -- here AMD is far better; they offer you the latest, thereby helping accelerate its adoption].
Also, BD is a new architecture and I'm sure after refinement it will do better. AFAIK it's a step in the right direction, and it's now up to AMD whether they can pull through with refinement.
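As an aside, on Linux you can check whether a CPU actually exposes the hardware virtualization and AES-NI features discussed in these comments by reading the flags line in /proc/cpuinfo. A minimal sketch (flag names per the Linux kernel: "svm" is AMD-V, "vmx" is Intel VT-x, "aes" is AES-NI):

```python
def cpu_flags(path="/proc/cpuinfo"):
    """Return the CPU feature flags reported by the Linux kernel.

    Returns an empty set on non-Linux systems or if the file is missing.
    """
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    # flags line looks like: "flags : fpu vme ... svm aes ..."
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

flags = cpu_flags()
print("AMD-V:", "svm" in flags,
      "| VT-x:", "vmx" in flags,
      "| AES-NI:", "aes" in flags)
```

On an FX or Phenom II box you'd expect "svm" to be present; "aes" only shows up on Bulldozer-generation AMD chips.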
My Athlon XP 2500 (1.8GHz) overclocked stably to 2.5GHz and smacked any Intel chip. My Athlon X2 was again a nice 700MHz overclock (can't remember the model number). I have a Phenom II X4 965BE, 3.4GHz, and 8GB of RAM at 1300MHz, 6-6-6-18. I'm happy with it. From sleep, Win7 is at the login prompt before my monitor wakes up. I've stopped my PC gaming days. I occasionally encode DVDs to high-quality x264/MKV (~3hrs per 2hr movie), queued overnight, and it is fine.
AMD was a powerhouse, but I've not been overly impressed since the Bartons. I'm quite happy with my setup. Really, CrossFire two high-end AMD video cards and I'm set for any game. Gaming performance is dependent on video cards, not CPUs. I'm fine @ 100 FPS vs. 120 FPS. Your eyes will more than likely never see the difference.
AMD made a good business decision taking over ATI. They are beating Nvidia in many ways, including game consoles. They allow PC gamers to have adequate motherboard/CPU/RAM combos and use the money they save for higher-end video cards. Unfortunately, gamers head to Intel because they think they need the highest-end CPU and RAM when they really need to sink more money into video cards.
I think AMD is stronger than most think because of the price/performance ratio. If you only had $1000 to build a gaming PC, you'd be better off spending less money on an AMD CPU and more on video, and you'd still have a faster PC than if you spent in the same ratio on an Intel setup.
I've always wished for one thing: AMD NEEDS TO ADVERTISE! Come up with a nice 6-note jingle (or 8)!
While the benchmarks are very revealing of the "ahead of its time" nature of Bulldozer, I think AMD should've kicked off by focusing on server applications instead of desktop ones.
Considering what I've seen so far I think some additional benchmarks on threading/scaling would come in handy – it would actually show the true nature of BD as, right now, it’s behaving like a quad-core processor (due to the shared nature of its architecture, I presume) in most cases, rather than an octacore. Charting that out might be very revealing. The situation now looks like Intel's 2nd (3rd?) generation hyperthreading quad-cores provide more efficient multithreading than 8 physical cores on an AMD FX.
Don’t get me wrong, we’ve heard from the beginning that BD will be optimised for server roles, but then we’re outside the feedback loop. Shouldn’t someone inside AMD be minding the store and making sure the lower shelves are also stocked with something we want?
A longer pipeline and the old "we'll make it up in MHz" line reeks of Netburst, unfortunately, and we all know how that ended. Looking at the transistor count, it's got almost twice as many as Gulftown, with a 27% bigger die size for the entire CPU... which will mean poorer yields and higher costs for AMD, not to mention that either the fabbing process gets seriously tweaked or the speed bumps will not come at all, as the TDP is already high-ish. Ironically, it reminds me of Fermi. Speaking of which... BD may become the punchline of many jokes like "What do you get when you cross a Pentium 4 and a Fermi?"
On the other hand, it seems AMD has managed one small miracle: their roadmaps will become more predictable (a good thing from a business perspective), and that will exert a positive influence on system integrators. Planning products ahead of the game, in particular on this 12-month cycle, might do some good for AMD, if they survive the overall skepticism that BD is currently "enjoying".
Bulldozer/Zambezi seems to look more like a server CPU repackaged as a consumer grade one. Excellent in heavily threaded apps, not so hot in single threads.
One CPU that is promised but isn't here is the FX-4170. I would have liked to see some benchmarks on it.
I haven't bought an Intel chip since 1997. But with this BS Bulldozer launch, that is now going to change! AMD should be ashamed of themselves. I for one will now sell all of my AMD stock and purchase Intel. I will probably only end up with a few shares, but at this point I cannot see supporting liars and fakes. And I will NEVER buy an AMD product again, not a video card, CPU, mobo, nothing! What a disappointment AMD is... All the AMD crap I have will be tossed in the trash. I'm not even going to bother trying to sell it. WooHoo, AMD made a world-record OC with a CPU not worth its weight in dog poo!
Very interesting review. I'd be interested to see Bulldozer's benchmarks when it's overclocked, which, if I am correct, is higher than any Intel CPU can go. AMD seems to have made a turnaround in this aspect - Intel CPUs were historically more overclock-able.
As always, a very detailed review. But what about the capability of the "value" chips? Namely, is it worth spending around $100 to replace an Athlon X4 with an FX-4100?
There are a number of us that picked up the X4 a couple years back for its low cost ability to encode and do general NLE editing of video. Is it worthwhile to replace that chip with the FX4100 in our AM3+ mobos? And what kind of improvements will there be?
As you rightly stated, a lot of us are attracted to AMD for their bang-for-buck. Just because the industry as a whole wants to bump up prices endlessly, there are still a lot of us that like to see good comparisons of the performance of CPUs available for around 1 Benjamin.
Frankly, the disappointment of AMD fans seems quite excessive to me. Worst CPU ever? What was Barcelona then, which couldn't compete with Core 2?
Bulldozer, setting aside old single-threaded applications, slots in between a Core i5 2500 and a Core i7 2600K.
Which other AMD CPU outperforms a Core i7 2600K in any single benchmark?
A higher clocked Thuban with 2 extra cores would have been hotter and more expensive to produce.
Setting aside AMD's stupid marketing, the AMD FX-8150 is a very efficient QUAD core. The performance per core is almost as good as Sandy Bridge, in properly threaded applications.
Then they came with the marketing stunt of calling it a 8 core.. it's not, in fact it doesn't have 8 COMPLETE cores; in terms of processing units, an 8 core Bulldozer is very close to a Sandy Bridge QUAD core.
The only reason why Bulldozer's die is so large is the enormous amount of cache, which i'm sure makes sense only in the server environment, while the low latency / high bandwidth cache of Sandy Bridge is much more efficient for common applications.
I think with Bulldozer AMD has put down a good foundation for the future: today, on the desktop, there is no reason not to buy a Sandy Bridge (however, I'm expecting Bulldozer's street price to drop pretty quickly).
However IF AMD is able to execute the next releases at the planned pace (+10-15% IPC in 2012 and going forward every year) THEN they'll be back in the game.
Man, you have a lot of optimism. I am a big AMD fan, but even I can't remain optimistic after this mess. I mean, how do you make a chip that is slow, expensive and loses to its older brothers? Barcelona was a huge success compared to this; it only seemed bad because expectations were high. This time around expectations became even higher, because no one expected AMD to actually go backwards in performance. WOW, that's all I can say. WOW.
But maybe you guys think that it's slower "clock for clock" or "core for core". It doesn't matter how you achieve performance. What matters is the end performance.
Bulldozer architecture allows it to have higher clock speed and more *threads* than Phenom. The penalty is single threaded performance.
Again, you can't compare it to a hypothetical 8-core 4.0GHz Thuban, because they couldn't have made it (and made any money out of it).
I'll repeat, the FX-8150 is NOT an 8-core CPU. Otherwise the i7-2600K is also an 8-core CPU... both can execute 8 threads in parallel, but each pair of threads shares execution resources.
The main difference is that Sandy Bridge can "join" all the resources of 2 threads to improve the performance of a single thread, while Bulldozer cannot. They probably didn't do it to reduce HW complexity and allow easier scalability to more threads and higher clock speed.
Because the future (and to a large extent, the present) is heavily multithreaded, and because Bulldozer is targeted mainly at servers. (and the proof is its ridiculous cache)
How about some BIOS screenshots? Is there a way in the BIOS to disable the northbridge in the chip and use the northbridge on the motherboard? Possibly better performance, or maybe a new ability to crossfire northbridges? (Yeah, I'm a dreamer.)

IMO, I don't think adding the northbridge to the CPU was a good idea, especially if it pulls away from other resources on the chip. I understand what adding the northbridge to the processor does, but does it turn off the northbridge that's already on the motherboard? The northbridge on the chip makes sense for an APU but not for a performance CPU, so why is the northbridge even in there? I myself would rather see the northbridge on the motherboard, utilizing that space instead of the space on the CPU. If there isn't a way to turn off the CPU's northbridge in the BIOS, I think the motherboard manufacturers should add that ability, along with the ability to use the onboard northbridge, in their BIOS, so you can at least get BIOS or firmware updates for the northbridge and perhaps get more performance out of the CPU/GPU.

When the new Radeon 7000 series video cards come out, if I buy this CPU with the 6000 series northbridge in it, am I going to take a performance hit, or am I going to have to buy a new processor with the 7000 series northbridge in it? Or will they come out with a 7000 series motherboard that uses a 7000 series northbridge and turns off the 6000 series northbridge in the chip, which in turn makes it useless anyway? I just don't like the fact that if I buy this product and want to upgrade my northbridge/motherboard, I might have to buy a new processor and perhaps a new motherboard. Or am I just paranoid or not understanding something?
Who knows, maybe in the next couple of weeks Microsoft and/or AMD will come out with a performance driver for the new processors. If they had come out with this processor when originally planned, it really would have kicked butt. Instead we get conglomerated ideas from over the five-year period, which looks like the original idea thrown onto a 2011 die. I am a die-hard AMD fanboy and always will be, just kinda disappointed; excuse my rants. I will be buying a 4-core when they hit the streets, hopefully in a couple of weeks.
From the caching issues, to the bad GloFo process, to the Windows scheduler, I reckon this processor wasn't ready for prime time. AMD didn't have any choice; I mean, they almost took an entire extra year, for Pete's sake. Even though my i5 2500 is on its way, I'm not stupid enough to believe this is the best the arch can do. There is a good reason that Interlagos cannot be bought in stores: AMD knows for a fact that they cannot sell this CPU to server makers, so they are busy working on it. I expect it might take one or even two more steppings to fix this processor. The multithreaded performance is there, so they only need a mature 32nm process to crank up the speeds and maintain the power consumption. IMO.
Reviews at other sites like Tom's Hardware and Guru3D are starting to make this look bad. How come everyone but Anand got to review it with water cooling?? Is this site on such bad terms with AMD?
Water cooling isn't magically going to help performance or power consumption in any way so why does it matter?? When you buy this CPU it comes with air cooling, and Anand was right to use that for this review.
Patrick: The 6000+ is the fastest Athlon 64 X2 dual core processor ever, but what happened to the FX family?
Damon: Patrick, you are right. The X2 6000+ is the fastest AMD64 dual-core processor ever... so why isn't it called FX? To answer that I have to explain what FX is all about... pushing the boundaries of desktop PCs. FX-51 did that right out of the gate, with multiple advantages over other AMD processors, and a clear lead on the competition. Move forward a bit to where AMD put high-performance, native dual-core computing into a single socket with the FX-60. Fast forward again and you see FX pushing new boundaries as "4x4" delivers four high-performance cores with a direct-connect, SLI platform that is ready to be upgraded to 8 cores later this year
I'm a little surprised you only posted Win7/Win8 comparison figures for the FX-8150. It would give a much more complete picture if you also posted an i7-2600K Win7/Win8 comparison.
I think anand handled this review fine. Bulldozer is a little underwhelming, but we still don't know where the platform is going to go from here. Is everyone's memory so short term that they don't remember the rocky SandyBridge start?
At first I was pissed off at being strung along for this pile of tripe. After sleeping on it, I am not completely giving up on this SERVER CHIP: 1) FX is a performance moniker; scratch the stupid amount of cache & crank the clock. 2) I'm sure these numpties can get single-thread up to Thuban levels. 3) Patch the Windows scheduler, ffs. Fix those (relatively simple) things & it will kick ass. But it means most enthusiasts won't be spending money on AMD for a while yet.
The biggest problem for a server chip is the load power level. It just doesn't compete on that benchmark, and that one is VERY important in a server environment from a cost/heat standpoint.
Let's hope that's just a crappy, leaky chip due to manufacturing, but it's too early to tell.
I've worked in a 'server environment'; of course power consumption is an issue. But at the lower clock speeds, and considering multithreaded performance, this is already a good/great contender. For virtual servers & scientific computing this is already a winner. With a few (hardware & software) tweaks it could be a GREAT PC chip in the long term.
There's benchmarks here and there but nothing to say it'll improve performance more than 10% across the board. In any case, the competition also benefits from Windows 8, so it's still not a sign of AMD closing any sort of gap in a tangible fashion.
But Bulldozer is different. The Windows 7 scheduler does not have a clue about its "modules" and "cores". So, for example, it may find it perfectly legit to schedule 2 FP-intensive threads on the same module, which will result in reduced performance on Bulldozer. Also, one may want to schedule two integer threads which share the same memory space on the same module instead of 2 different modules; this way the two threads can share the same L2 cache, instead of having to go to the L3, which would increase latency.
All of the above does not apply to Thuban; to a lesser degree it applies to Sandy Bridge, but Windows 7 scheduler is already aware of Sandy Bridge's architecture.
It is, although they're similar concepts. Let's take an example: you have 2 integer threads working on the same address space (for example, two parallel threads in the same process), and all cores are idle. What is the best scheduling on a Hyper-Threading CPU? You schedule each thread on a different core, so that each can enjoy full execution resources.
What is best on Bulldozer? You schedule them on the SAME module. This is because the execution resources are already split within a BD module, so there would be no advantage to scheduling the threads on different modules. HOWEVER, if the 2 threads are on the same module, they can share the L2 cache instead of going through the L3, so they enjoy lower memory latency and higher bandwidth.
There are cases where the above is not true, of course.
But my example shows that optimal scheduling for Hyperthreading can be SUB-optimal on Bulldozer.
Hence the need for a Bulldozer-aware scheduler in Windows 8.
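For what it's worth, you can experiment with these scheduling policies yourself by pinning a process to specific logical CPUs instead of waiting for the OS scheduler to guess. A minimal Linux-only sketch using Python's affinity calls; which logical CPU numbers map to which Bulldozer module is an assumption here, as it varies by BIOS and kernel:

```python
import os

def pin_to_cpus(cpus):
    """Pin the calling process to the given set of logical CPUs.

    On a Bulldozer-style chip you might pass {0, 1} to keep two
    cooperating integer threads on one module (sharing its L2),
    or {0, 2} to spread them across modules. Returns the resulting
    affinity set, or None where the call isn't available (non-Linux).
    """
    if not hasattr(os, "sched_setaffinity"):
        return None
    os.sched_setaffinity(0, cpus)       # 0 = the current process
    return os.sched_getaffinity(0)

# Example: restrict ourselves to logical CPU 0
print(pin_to_cpus({0}))
```

Benchmarking a two-thread workload with both pinnings is a quick way to see the shared-L2 vs. split-resources trade-off the posts above describe.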
AMD needs a 40-50% performance gain and they're not going to see it using Windows 8. What AMD needs is... actually, I have no clue what they need. I've never been so dumbfounded by a product that makes no sense and has no position in the market.
With a 40-50% gain Bulldozer would be ahead of even Ivy Bridge... and whatever comes next.
Or are we still talking about SuperPI? Or games run at 640x480 lowest quality settings?
The fact is, almost all single-threaded applications are old; they already run super fast on ANY CPU, and the difference can only be seen in benchmarks.
All recent performance-demanding applications are properly multithreaded, and there Bulldozer is competitive with the i5 2500 and occasionally with the i7 2600 (and with a 10% boost Bulldozer would be competitive with the i7 2600).
And this will become more and more the standard one year from now.
Sure, Bulldozer has not met enthusiasts' expectations. It doesn't perform as people would expect an "octacore" to (but it's not one; it's just a quad with a different form of hyperthreading and "clever" marketing) and it doesn't deserve the FX moniker.
But still it's the most competitive CPU AMD has launched in years, perhaps with the exception of Zacate.
Not all applications are heavily multithreaded; there is still a need to improve single-threaded performance. And even for those few loads where it is competitive in performance, it gets there with twice the power draw.
But increasing single-threaded performance has a cost in die space and circuit complexity. Bulldozer has a huge die just because it has enormous caches (8MB L2 vs. 1MB on Sandy Bridge), which will probably turn out useful for server workloads (but that's just a guess). Looking at the die shot, you'd get a 40% die area reduction with "normal" caches. So AMD engineers decided to forgo single-threaded performance improvements in favor of higher multithreaded scalability and higher clock speed scalability.
We'll see if in the long run this will pay off.
I agree power consumption doesn't look good in comparison with Intel, but it does look good in comparison to Thuban.
This is the first released silicon of Bulldozer... I expect power consumption to improve with newer steppings and silicon process tuning.
That being said, Intel has the best silicon process in the whole industry; AMD can't compete with that. But I'd guess that at lower clock speeds (as in servers), AMD's power consumption will improve a lot. It looks like with the FX, AMD tried to push their current silicon as far as they could (within the 125W TDP, which is sort of an industry standard).
Some people are missing the point. At this stage in the game, processor speed is a moot point beyond benchmarks. AMD and Intel both make very fast CPUs relative to what gamers and everyday users use them for. Intel CPUs are blazing fast and AMD CPUs are fast. The average Joe doesn't need more than a dual-core CPU. If you were going to actually do something that requires heavy multithreading, then it comes down to how efficiently the app makes use of the cores and the ability to use hyperthreading. If you wanted the most performance for a multithreaded application, you would pick more physical cores over virtual cores. So for most of us it comes down to bang for buck.
8 cores is better than 2, 4, or 6 for true multi-threaded capable applications. For speed tests Intel wins hands down.
If you were sitting next to someone playing a game and all things were the same except CPU, you would not be able to tell which machine is running what CPU. However you would notice if one costs significantly more than the other.
Great review but there is a text error when referring to pass one vs. pass two of the benchmark mentioned in the Subject line. You said:
"The standings don't change too much in the second pass, the frame rates are simply higher across the board. The FX-8150 is an x86 transcoding beast though, roughly equalling Intel's Core i7 2600K. Although not depicted here, the performance using the AMD XOP codepath was virtually identical to the AVX results."
But the graph clearly shows a complete flip-flop from first pass to second pass. Looking closely, it appears you ordered the text and graphs differently and were writing as if the non-AVX and AVX-enabled graphs were next to each other instead of in separate sections. Basically, the text and graphs don't match up.
You're an utter retard. The reason they're sold out is newegg advertised these nicely all over their site, including the front page, with "World's first 8-core desktop processor."
There are plenty of reasons to purchase these processors aside from their performance and that's ok. But the majority bought them thinking they're gonna "rock", and those are the ones "showing intelligence." Same goes to you for thinking the majority is well-informed/intelligent.
What's even worse, the 8-core version for sale is the 3.1GHz one, not the 3.6GHz tested in this review. I'm seriously LOL'ing...
How many did Newegg have in stock anyhow? Wouldn't that figure matter regarding your ignorant comment?
I have used AMD products for years. I use Intel at work. So to me there is no real difference between the two for what Business and the Average Computer user want or expect. Does it run, does it do the work I require of it, and do my programs and Network Access work well and are reliable?
Intel indeed has incredible Processors, fast and reliable, and in the high end - expensive.
AMD is low and mid range, with processors the average person can afford. Who is the most innovative? Both. Intel has been lately; now I think users need to give this new x86-64 architecture a chance.
I have an Asus M5A99X EVO with an FX-6100 installed. The only problem I have had was having to upgrade the BIOS to accept the new processor. So far I have had the processor at 4.2GHz, though AIDA64 caused a BSOD on one test. At 3.8GHz it runs like a champ. Still working back to as close as I can get to 4.2 on air.
After three years I have retired my old Phenom II tri-core 720 for this, and it works for me. I am not an extreme gamer, etc. But test it yourself before being too overly critical.
It does work for me.
As an aside, next a SSD for faster response.
For those interested:
Asus M5A99X MB, BIOS 0810 (newest)
AMD FX-6100 at 3.8GHz
Corsair Vengeance 1600, 16GB
HIS Radeon HD 6850
Windows 7 Ultimate 64
HP LP2475W monitor at 1920x1200 (DisplayPort)
WD 500 SATA, WD 1001 SATA
LG H20L BD-R, Plextor DVDR
Enermax 620 Liberty PSU (I know, old, but it works)
I have been having a hard time writing a comment on this topic without drawing fire from trolls.
This review is hogwash without more information.
If the hardware is the same on all test machines apart from the CPUs, then it's no wonder the performance was so bad. Six cores are going to utilize, and I am just pulling a number out of my... hat, 4GB of RAM more efficiently than eight cores, using simple kitchen math. No need to break out the slide rules. It's a known fact, to most, that the big bottleneck in the multicore/multiprocessor world is memory. Mind you, that's assuming all the code used for testing was written with multithreading in mind.
You just can't compare apples to bananas in this manner.
The linux kernel is more or less straight C with a little assembly; it is much easier on a compiler frontend and more likely to stress the backend optimizers and code generators.
Chromium is much more representative of a modern C++ codebase. At least, it is more relevant to me.
What's the point in having 8 cores if it's not even as fast as an Intel 4-core and you get better performance overall with Intel? Here's the BIG reality: the high-end 8-core is not that much cheaper than a 2600K. Like $20-60 MAX. You'd be crazy to buy an 8-core for nearly the same price as an Intel 2600K...
Well, these numbers are pretty dismal all around. Maybe as the architecture and the process mature, this design will start to shine, but for the first generation, the results are very disappointing.
As someone who is running a Phenom II X6 at a non-turbo 4.0GHz, air cooled, I just don't see why I would want to upgrade. If I got lucky and got a BD overclock to 4.6GHz, I might get a single-digit % increase in performance over my Phenom II X6, which is not worth the cost or effort.
I guess on the plus side, my Phenom II was a good upgrade investment. Unless I'm tempted to upgrade to an Intel set up in the near future, I think I can expect to get another year or two from my Phenom II before I start to see upgrade options that make sense. (I usually wait to upgrade my CPU until I can expect about a 40% increase in performance over my current system at a reasonable price).
I hope AMD is able to remain competitive with NVidia in the GPU space, because they just aren't making it in the CPU space.
BTW, if the BD can reliably be overclocked to 4.5 GHz+, why are they only selling them at 3.3 GHz? I'm guessing because the added power requirements then make them look bad on power consumption and performance per watt, which seems to be trumping pure performance as a goal for their CPU releases.
A big thumbs down to Anand for not posting any of the overclock benchmarks. He ran them, why not include them in the review?
With the BD running at an air-cooled 4.5 GHz, or a water-cooled 5.0 GHz, both a significant boost over the default clock speed, the OC benchmarks are more important to a lot of enthusiasts than the base numbers. In the article you say you ran the benchmarks on the OC part; why didn't you include them in your charts? Or at least give some details in the section of the article on the overclock? You tell us how high you managed to overclock the BD and under what conditions, but you gave us zero input on the payoff!
I think Anand hit the nail on the head mentioning that clock frequency is the major limitation of this chip. AMD even stated that they were targeting a 30% frequency boost. A 30% frequency increase over a 3.2 GHz Phenom II (the AM3 launch frequency, I think) would be 4.2 GHz, 17% faster than the 3.6 GHz 8150.
If AMD really did make this chip to scale linearly to frequency increases, and you add 17% performance to any of the benchmarks, BD would roughly match the i7. This was probably the initial intention at AMD. Instead the gigantic die, and limitations of 32nm geometries shot heat and power through the roof, and that extra 17% is simply out of reach.
I am an AMD fan, but at this point we have to accept that we (consumers) are not a priority. AMD has been bleeding share in the server space where margins are high, and where this chip will probably do quite well. We bashed Barcelona at release too (I was still dumb enough to buy one), but it was a relative success in the server market.
AMD needs to secure its spot in the server space if it wants to survive long term. 5 years from now we will all be connecting to iCloud with our ARM powered Macbook Vapor thin client laptops, and a server will do all of the processing for us. I will probably shed a tear when that happens, I like building PCs. Maybe I'll start building my own dedicated servers.
The review looked fair to me, seems like Anand is trying very hard to be objective.
"server space where margins are high, and where this chip will probably do quite well."
I don't see how Bulldozer could possibly do well in the server space. Did you see the numbers on power consumption? Yikes.
For servers, power consumption is far more important than it is in the consumer space. And BD draws about TWICE as much power as Sandy Bridge does while performing worse.
BD is going to fail worse in the server space than it will in the consumer space.
For a start, you're far more likely to see heavily threaded workloads on servers than in the consumer space. Bulldozer does far better here than with lightly threaded workloads, and even the 8150 often exceeds the i7-2600K under such conditions, so the potential is there for it to be a monster in the server space. Secondly, if Interlagos noticeably improves performance over Magny Cours, then, coupled with the fact that you only need to pop an Interlagos CPU into your existing G34 system, this should be a worthwhile upgrade. Finally, power consumption is only really an issue with Bulldozer when you're overclocking. Sure, Zambezi is a hungrier chip, but remember that it's got a hell of a lot more cache and execution hardware under the bonnet. Under the right circumstances, it should crush Thuban, though admittedly we expected more than just "under the right circumstances".
I know very little about servers (obviously), however I am looking forward to Johan's review; it'd be good to see this thing perform to its strengths.
First, in the server space BD isn't competing with the i7-2600K. You have to remember that all the current Sandy Bridge i7s waste a big chunk of silicon real estate on the GPU, which is useless in servers. In 3 weeks Intel is releasing the 6-core version of SB, essentially taking the transistors that were used for the GPU and turning them into 2 extra cores.
Even in highly threaded workloads the 8150 performs at more or less the same level as the i7-2600K. In 3 weeks SB will increase threaded performance by 50% (going from 4 cores to 6). Once again the performance gap between SB and BD will be huge, in both single-threaded and multi-threaded workloads.
Second, BD draws much higher power than SB even at stock frequency. This is borne out by the benchmark data in the article.
The reason for mentioning the 2600 is because that's the only comparison we have for the moment. I don't expect Valencia to use as much power as Zambezi even on a clock-for-clock basis.
I never want to see a link to that dickbag again. His blogs about WW3, Bigfoot, & 2012 should be enough to give you an idea. Also, the dudes at Kubuntu basically do repackaging & KDE integration. They don't touch hardware. No doubt there can be significant gains through software, but I would rather stab myself through the eyelid than read anything more by that 'person'.
Turns out he was one giant troll, or seemingly so. Now, he's pointing the finger at...
"Problem solved, it’s just a thermal protection issue, people have been pushing voltages too high. Maybe there’s some variance in mainboard chipsets, but some overclockers are hitting really good numbers."
Really? After all that hoo-har about registry patches, BIOS flashes and the like, we're now blaming thermal protection? I'm taking this with a litre of Dead Sea water.
Indeed, much, much better performance was expected from BD. I had been an AMD-focused PC buyer since 2005, in AMD's "golden age", when I purchased an AMD Turion-based laptop. That CPU was actually better than the corresponding Intel competitor of the moment, the Pentium M Dothan, as some people probably remember.
We know the rest of the story since then till now...
But the released BD-based product in its current state seems barely competitive at all on the desktop market. Presumably its popularity will be much lower than in the case of the previous Phenom II lineup...
Why are there no benchmarks with it overclocked... especially gaming? It would be relevant, as these processors ship unlocked as standard. All I'm asking for is a reasonable overclock on air to be included...
I think they all rushed, wanting to position their review as the first. If you read other posts around the network, Bulldozer places better against the i7-2600K in many things than they said at first. The problem was in the BIOS: the Asus and Gigabyte motherboards were released with immature BIOSes; in fact the overclock would hold more. With ASRock and MSI, a Bulldozer at 4.6 GHz may be better than the i7 and i5 at a 5.2 GHz OC. Don't believe me? Check this and read carefully: http://www.madboxpc.com/foro/topic/161318-la-verdad-sobre-el-amd-fxo-bulldozer/page__st__20
Well, the situation among AMD's CPUs is still the same... good ideas, great expectations, and manufacturing delays resulting in disappointing results compared to Intel. Bulldozer would have been competitive 2 years ago, not these days. At this point AMD desperately needs way higher clock speeds and core optimizations to be competitive. The predicted 10-15% performance-per-watt increase each year looks really funny compared to Intel's planned CPU roadmap (it is already known that the Ivy Bridge parts due in Q1 2012 will drop the TDP of the top performance class from 95W to 77W, which is almost a 20% cut in power consumption alone, not to mention the performance boost from the 22nm manufacturing process). I am worried that the performance gap between Intel and AMD CPUs is going to broaden in the near future, without any light in the darkness bringing true competition to the CPU field.
I was waiting for BD to come, but now, what a disappointment. Still, AMD should continue to compete with Intel; otherwise there won't be any battle to watch. I'd love to see good pricing from AMD.
I think you are wrong on one point, about the FPU. You claim that one Bulldozer module has the same FP capacity as earlier AMD processors. However, in reality it has twice the (theoretical) capacity. Whereas each K8/K10 core had one 128-bit FP unit, each Bulldozer module has 2 x 128-bit FP units. They can work together as one 256-bit unit when used with the new instructions (AVX and others). See for example this page for details: http://blogs.amd.com/work/2010/10/25/the-new-flex-...
However, it is strange that this does not show in performance. Could anyone explain this to me?
"in single threaded apps a good 40-50% advantage the i5 2500K enjoys over the FX-8150." These are the apps most people use, duh. Core for core, Bulldozer is an epic fail. This is not going to be a popular desktop chip at all. As for servers, AMD's share has dropped from 20% to 5.5% in the last few years. I doubt this chip will be the savior.
They have lost ground in the server market, so a radical new design won't make a difference...? I admire your logic. For the record, I specifically look for programs/games which are multithreaded; it often shows good programming on the whole, unless of course there are other factors limiting the system (like net speed, or the GPU). Perhaps I'm just ahead of the curve compared to your average troll, duh.
I think the main competitor for Intel in the future is going to be the ARM processor makers. As Intel goes into that space with x86, ARM chips get faster and faster, and with Windows 8 supporting ARM you get a mix; soon ARM chips will invade the desktop/laptop market.
I decided to try AMD when I "inherited" my brother's older socket 939 hardware some years ago, then built my own using a Phenom II X4 940 BE.
At the time it was released, the 940 wasn't too far behind the i7 920 in many respects, plus it was about $70 cheaper...I was very satisfied with my decision. However, after 3 years of advancement by both companies and watching Intel ONCE AGAIN come up with something that gives excellent performance with ever-increasing power reduction, I was on the fence about Bulldozer even before the reviews came out.
Once I saw the majority of the reviews, I knew what side of the fence to be on for obvious reasons..."Bulldozer" just didn't hit the expectations I thought it should, especially when it comes to load power consumption. Perhaps in a couple years when it matures, but I didn't feel like waiting for AMD to iron out all the wrinkles.
My next build is already done and, sorry to say, it's NOT AMD. For what I do, the i5-2500K is just too good to pass up at combo prices that result in a $200 processor (less than what I paid for my X4 940 when IT was new).
Best wishes AMD, I hope you can make "Bulldozer" work, but for now "BD" stands for "Big Disappointment". I'll check back with you in a year or so to see how things are doing.
Intel's first "dual core" was actually 2 processors on one chip.
They could have saved a lot of engineering time by merely shoehorning two X6 Thuban processors together at 32nm and selling it as a 12-core. Now that would have rocked!
Does anybody remember the first Intel processors with the entirely new architecture, called Core Duo, Conroe-L or something? They were pretty lousy at first, with only slightly higher performance than the previous generation, and constantly overheating. Later the Core 2 Duo was a complete success, not to mention the first-generation Core i processors and of course Sandy Bridge. Considering the fact that these Bulldozer processors are AMD's first attempt at a completely new architecture, I say that both performance and power consumption are at reasonable levels. Upcoming models will surely do a lot better.
I'd love to see how well the SMP client runs on an "8 core" Bulldozer part compared with a quad Sandy Bridge, and for that matter a 6-core Phenom II and 6-core Nehalem.
It SEEMS like it should do really well, right? Or not? Because basically an 8 core Bulldozer is a quad core when it comes to floating point, right? And Folding uses a lot of floating point? Or...?
Also, if it really has double the transistor count of Sandy Bridge...where is the performance? It seems like even on heavily threaded stuff it's just kind of about equal with Sandy Bridge, which doesn't seem right....
Considering the power consumption and the reported problems with many games, e.g. Deus Ex, Portal 2, Shogun... I would see this as more appealing around the $180 mark. The 1100T is a better buy if you must do AMD.
The statement is partially true. There are quite a few apps where the Thuban outguns BD, and many cases where it outperforms on energy efficiency as well.
I wanted to say one thing. I don't have one, but a friend of mine does, and he showed me something my i5 can't do: he was playing a game called Crysis (if that's how you spell it) and running a video editing program at the same time. I can't do that with my i5; if I did, the game would start to lag. Crysis takes a lot out of your CPU (bad programming; even video cards have trouble with that game), but BD seems to multitask better than what my i5 can do. Just wondering if it's more for people who do a lot of stuff at one time.
Thanks for pointing that out, because not so long ago I saw a video on AMD's web site where they were showing off an AMD Llano notebook vs. an Intel Sandy Bridge Core i7 notebook. They started the same benchmark on both notebooks, and the Intel was quite fast, but as they opened more and more programs at the same time, the Intel started to drop in performance while the AMD kept running stable. So my suggestion would be to run all the benchmarks on the Bulldozer and i7-2600K again, but this time open about 10 or 20 other programs at the same time; then you will truly see the Bulldozer shine. I am not an AMD fanboy; my current build has an Intel Pentium G860, and I am very disappointed in myself. I should've gone with the AMD Q640; it was around the same price when I bought it. My next build will be an AMD FX-4100. HA
Well, I very excitedly bought an 8150-based system for number crunching, as the performance/$ looked very good. I could buy a "quiet" system for AUS $1130 with an SSD and only 8 GB RAM. I had previously purchased an Intel i7-2600K, but could never get it to overclock and run a 64-bit Java app (Napoleon Spike from DUG) 24/7; it fell over after 6 hrs, or 12, or 23, or 47. It always fell over despite water cooling. Now the bulk of my work is done by Xeons in the rack, with a couple of dual-5680 systems doing the heavy lifting (2 x 6 cores + hyperthreading looks like 24 CPUs to the OS). These are good, stable systems with 96 GB RAM, but high overall system cost. I wanted a few cheap and moveable fast CPUs. Boy, did the Bulldozer fail to deliver. More is better, measured in bytes-inversion throughput/minute:
- BD 8150: 115-123k in 8/8 threads, i.e. flat out
- i7 2600: 237-268k in 8/8 threads, i.e. flat out
- Xeon dual 5680: 333-356k in 12/24 threads, i.e. half loaded
- i7-870: 166k in 8/8 threads, i.e. flat out
- Xeon dual E5520: 190k in 12/16 threads
- Xeon dual 5430: 132k in 8/8 threads
The Bulldozer is the slowest and the newest... very poor performance. Eclipsed by Intel at a similar price point. I might as well replace the MB and CPU and go with an i7 3960 or 3930...
I dunno if anyone noticed, but if you study the architectures carefully, what AMD calls a 'module' is comparable to a 'core' of Intel's. Intel's Hyper-Threading allows two logical threads per core, but AMD's approach allows only one thread per core. The Intel i7-2600K has 4 physical cores and 8 logical threads (while the i5-2500K has 4 physical cores without Hyper-Threading). Compared to that, the FX-8150 contains 4 modules which can execute 8 threads, with 2 cores per module, each core executing 1 thread. As for the benchmark results, they agree with the fact that the FX-8150 is comparable to, albeit a little less powerful than, the i5-2500K, because of the architecture difference between Intel's core and AMD's Bulldozer module. If AMD ever brings out a higher-module-count FX processor, it would be really interesting to see how that matches up with the i7-2600K, although the shared L2 cache architecture may be detrimental to the performance of these processors.
Something is wrong. If I look at a die shot of Llano, the core is about 1½ times the size of the 1 MB L2 cache. If I look at a Bulldozer module, it is about 1½ times the 2 MB L2 cache. To me this indicates that a Bulldozer module is about 100% larger than a Phenom II core, which is far from the 12% extra core size that AMD had previously indicated was the cost of adding another core to form a module. The 12% was expected to allow AMD to nearly double the core count on a given process node to convince the server market, and to give plenty of die space for the GPU on the Llano APU. Where am I wrong, and what is right?
I believe AMD really missed its shot badly, but it is still the right social choice, because what will happen if Intel gets an x86 monopoly? AMD is still reasonably priced, and when you have to live with it in everyday life, will you really notice the difference in performance? Unless you really go for the top of the line in every part of your system, you would go for a top Intel i7. But I've never done that, and I have always ended up with a reliable, good-performance AMD system for less than $800, counting the power supply I had to replace this year. My point: unless you want a death machine, go for AMD and you will feel better about yourself ;) PS: Sorry for the terrible English.
There is no doubt that whatever critiques have been posted are valid but I skimmed a few pages and saw no "Consumer" comments.
I have purchased an 8150 with an AM3+ motherboard and will be putting the unit together.
In my days since the Z80 and 48k, this represents the nicest CPU ever for me. That it is affordable, and that I will have 8 cores to task with my hobby programming, such as trying to factor RSA numbers or the like, makes the AMD 8-core a dream system for the price.
I picked up a case, motherboard, power supply, 1.5 TB drive, DVD, 1 GB video card, 16 GB RAM, a 28-inch monitor, and a wall mount for the monitor so I can have two 28s, with one turned the long way for source code, and perhaps something else. Anyway, $1200 is the cost. Now, this is my first bare-bones experience too, so all in all it is exciting to get such a dream machine, and I am happy to step forward and support AMD.
I don't know what awaits when the memory arrives and I boot up, but it feels like a starship already, and I have vowed to learn OpenMP under GCC to advance into multi-core programming.
So perhaps there will be issues. Perhaps this is not all that, nor is it what will come, but from where I am at, I am still on the AMD home team, and my money is flowing in the economy.
I went from a TRS-80 to an Amiga, then to twin AMD single-core chips on one motherboard, moved to the early quad cores dreaming of dual quad cores when a system with 8 cores of that day would have cost $4900, and now picked up a system that as a boy in 1973 I would have considered alien UFO technology, for about what I paid for dual single-core chips just a few years ago.
So Bulldozer can't be all that bad. The price is good! I will see how she runs. I often peg cores at 100% for days when searching for RSA factors. Looks like I get more bang for the same bucks this time, and I am all for that.
Thank you AMD for such a wonderful CPU. I plan to make good use of it, and thanks to the motherboard I can watch out for heat issues much more easily than ever.
Not to mention it looks like the sound system is way advanced over the last computer as well.
So from a consumer / hobby programmer point of view, this is very cool indeed.
Thank you for being the first to actually contribute some real world response to this architecture. So many trolls on this thread that are intel fanboys.
Also, if you're using Xen with this thing, I would be interested in seeing some feedback on how multiple guests (like more than 4) act when trying to fight for floating-point processor time. It would be interesting also to see if 4 floating-point threads and 4 integer threads can all run at the same time with no waiting. That might be asking too much for now, though.
Who would benchmark at 1024 x 768 on an enthusiast site where everyone owns at least one 1920 x 1080 monitor, given that they are 1. dirt cheap and 2. the single biggest selling resolution for quite some time now? Real-world, across-the-board benches at 1920 x 1080, please!
I am not trying to discount the reviewer, the performance of Sandy Bridge, or games as a test of general application performance. I have no connection to any company mentioned anywhere on this site. I am just a software engineer with a degree in computer science who wants to let the world know why these metrics are not a good way to measure the relative performance of different architectures.
The hardware world has changed drastically, and the software world has no chance to keep up with it these days. Developing software implementations that utilize multiprocessors efficiently is extremely expensive and usually is not prioritized very well. Business requirements are the primary driver even in the gaming industry, and "performs well enough on high-end equipment (or, in the business application world, on whatever equipment is available)" is almost always as good as a software engineer will be allowed time for on any task.
In performance minded sectors like gaming development and scientific computing, this results in implementations that are specific to hardware architectures that come from whatever company decides to sponsor the project. nVidia and Intel tend to be the ones that engage in these activities most of the time. Testing an application on a platform it was designed for will always yield better results than testing it on a new platform that nobody has had access to even develop software on. This results in a biased performance metric anytime a game is used as a benchmark.
In business applications, the concurrency is abstracted out of the engineer's view. We use messaging frameworks to process many small requests without having to manage concurrency at all. This is partly due to the business requirements changing so fast that optimizing anything results in it being replaced by something else instead. The underlying frameworks are typically optimized for abstraction instead of performance and are not intended to make use of any given hardware architecture. Obviously almost all of these systems use Java to achieve this, which is great because JIT takes care of optimizing things in real time for the hardware it is running on and the operations the software uses.
As games are developed for this architecture it will probably see far better benchmark results than the i series in those games which will actually be optimized for it.
A better approach to testing these architectures would be to develop tests that actually utilize the strengths of the new design rather than see how software optimized for some other architecture will perform. This is probably way more than an e-mag can afford to do, but I feel an injustice is being done here based on reading other people's comments that seem to put stock in this review as indication of actual performance of this architecture in the future, which really none of these tests indicate.
I bet this architecture actually does amazing things when running Java applications. Business application servers and gaming alike. Java makes heavy use of integer processing and concurrency, and this processor seems highly geared towards both.
And I just have to add, Cinebench is probably almost 100% floating-point operations. This is probably why Bulldozer does not perform any better there than the Phenom II X4.
Also, AMD continues to impress on the value measurement. Check out the PassMarks per dollar on this bad boy:
Beware!!!! This chip is junk. I love AMD with all my heart and soul, but this FX chip is a black-screen machine. It breaks my heart to write this. I am sending it back and trying to snag the last X6 Phenom IIs I can find. The fact that this chip is a dud is too well hidden. When I called Newegg, they told me I was the second one today with horror stories about this chip.
MSI would not come clean... this chip is a turkey... yet they were nice.
I will waste no more time with this nonsense. My 754s work better.
We need honesty about the failure of this chip and the fact that Windows pulled the hotfix. TLB bug, part two. Even Linux users say that after GRUB goes in, black screens. Why isn't the industry coming clean on this issue? AMD's 939 kicked Intel's butt for 3 years, till Intel got it together. We need AMD, but I do not like hidden issues and lack of disclosure. Buyer beware!
Guys, you are already upset because you spent your lunch money on Intel. Even with higher-end boards and memory, and even with half as much memory onboard (32GB vs. Intel's 64GB), Intel is misquoting their performance again. No matter what you say, AMD is to Intel as Dodge is to Chevrolet, and when it gets down to AMD versus Intel in games, Intel takes another hardcore whipping, behind and ahead. It's the same thing as the DX4 processor versus the Pentium: even though the Pentium was a class higher, when running the same programs (Doom, for example) the Pentium couldn't run Doom anywhere near as well as a simple AMD DX4. The same stays true: this Bulldozer has already broken records. AMD only lacks in one area: when you install Windows, the Intel drivers already deliver at least 80 percent of Intel's performance out of the box, whereas AMD needs a specific driver to run properly. Once that driver is matched, AMD is the General Lee versus Intel's Camaro, and it's a true shame that Intel, even with 2x as much DDR3 memory, can't even pick up the torch when AMD is smoking past like a jet on the highway to hell for Intel. Sorry, Intel QX9650, hahaha.
Watch AMD take Diablo 3 (within an expansion or two, it will be so). Intel always lags hard on gaming compared to a weaker AMD class. Point proven: Everest has a lot of false benchmarks favoring Intel. For example, in NWN2 a Phenom X3 8400 (triple core) has a bench of 10880, yet an Intel Core 2 Duo E7500 has a bench of 12391; that's a 2.9 GHz CPU versus a 2.1 GHz one. The kicker is that the Intel is a Dell and the AMD is an Aspire, with DDR2 memory on the AMD and DDR3 memory on the Intel. All the Intel bus figures read higher (like they always do), but try running the same video board on both systems, then try loading 132 medium-size NWN2 maps on each: no way the Intel can do it, while the AMD can run the game editor and the maps at once. Intel is selling you a number; AMD is selling you true frames per second. But you're going to say, "oh, but my Intel is a better core" and this and that. OK, now let's compare the price of the two systems: the Intel was $2,500, the AMD was $400. Why do you think that Phenom just stomps the ass off that Intel? (Always has, always will.)
I work as a building architect and use this CPU on my Linux workstation, in a Fractal Design define mini micro atx case, with 8GB ram and AMD radeon hd 6700 GPU.
I usually have several applications running at the same time. Typically BricsCAD, a file manager, a web browser with a few tabs, Gimp image editor, music player, our business system and sometimes Virtualbox as well with a virtual machine.
I do a lot of 3D projects and use Thea Render for photo rendering of building designs.
I use conky system monitor to watch the processor load and temperature.
These are my thoughts about the performance:
Runs cool and the noise level is low, because the processor can handle several applications without taking any stress at all.
Usually runs at only a few % average load for heavy business use (graphics and CAD in my case).
When working, you get the feeling that this processor has good torque. Eight cores means that most of the time every application can have at least one dedicated core, and there is no lag even with lots of apps running. I think this will be a great advantage even if you use a lot of older single-core business applications.
The fact that this processor has rather high power consumption at full load is a factor to take into consideration if you put it under a lot of constant load (and especially if you overclock). For any use except really heavy-duty CPU jobs (compiling software, photo rendering, video encoding), temporary load peaks will be taken care of in a few seconds, and you will typically see your processor working at only 1.4 GHz. When idle, the power consumption of this CPU is actually pretty low, and temporary load peaks make very little difference in total power consumption.
I sometimes run photo rendering jobs for up to 32 hours and think of myself as a CPU-demanding user, but still, most of the time my computer is running, it will be at idle frequency. I consider idle power consumption to be by far the most important value of comparison between processors for 90% of all users. This is not considered in many benchmarks.
It is really nice to fire up Thea Render, use the power of all cores in interactive rendering mode while testing different materials on a design, and then start an unbiased photo rendering and watch all eight cores flatten out at 100% load at 3.6 GHz.
Not only does this processor photo render slightly faster than my colleague's Intel Sandy Bridge; what is really nice is that I can run, let's say, four renderings at the same time in the background for a sun study, and then fire up BricsCAD to do drawing work while waiting. Trying to do this was a disaster with my last i5 processor. It forced me to do renderings during the night (out of business hours) or to borrow another workstation during rendering jobs, because my workstation was locked up by more than one instance of the rendering application.
To summarize, this is by far the best setup (CPU included) I have ever used on a workstation. Affordable price, reasonably small case, low noise level, completely modular, and I will be able to upgrade in the future without changing my AM3+ motherboard. The CPU is fast and offers superb multitasking. This is the first processor I have ever used that also offers good multitasking under heavy load (photo rendering + CAD at the same time). This is a superb CPU for any business user who likes to run several apps at the same time. It is also really fast with multi-core-optimized software.
AMD FX-8150 is my first AMD desktop processor and I like it just as much as I dislike their fusion APUs on the laptop market. Bulldozer has all the power where it is best needed, perfectly adopted to my work flow.
I don't know what it is with all this hype destroying AMD's reputation. The Bulldozer architecture is the best CPU design I have seen in years; I guess the underdog is not well respected. The Bulldozer architecture has more pipelines and schedulers than the Core 2. The problem is that code is compiled Intel-optimized, not AMD-optimized. These benchmarks of a bunch of applications I don't use have no bearing on my choice to buy a CPU; there are some benchmarks where an i5 will outperform an i7, so what valid comparisons are we making here? The Bulldozer CPUs are dirt cheap, yet people expect them to be cheaper, and they don't require high-clock-speed RAM and run on cheaper motherboards. AMD is expected to keep up with Intel on the manufacturing process, but cutting corners and going down to 32nm, then 22nm, as quickly as possible does not produce stable chips. I have my kernel compiled for AMD64 and it is not taxed by anything I am doing.
AMD still hasn't been able to pull out of the rut that Intel left them in after the Sandy Bridge breakthrough. I am a (not so proud) owner of an FX-4100 in one of my PCs and an 8150 in the other. The 4100 compares to an Ivy Bridge i3 or a Sandy Bridge i5. I will give AMD partial credit, though: the 8150 performs at the Ivy Bridge i5's level for an almost identical price.
And here we are in 2020, some 9 years after this review and 7 years after your comment, and AMD still hasn't managed to equal Intel as a gaming-performance contender. AMD's only saving grace is the fact that the higher resolution demands of 1440p and now 4K essentially make any modern game GPU-bound and more dependent on GPU power.
I always come back to this review every few years just to have a good laugh looking back at this turd architecture, and especially at genius comments like: "You don't get the architecture"; "it's a server CPU"; "it's because Windows scheduler"; etc., etc.
No, it wasn't any of those things. The CPU's a turd. It was a turd then, it's a turd now, and it will be a turd no matter what. It wasn't more future-proof than either Sandy or Ivy, 2600Ks from 11 years ago still run circles around it in both single and multi-threaded apps, old and new. The class action lawsuit against AMD was the cherry on top.
It really never gets old to read through the golden comment section here and chuckle at all the visionary comments which tried to defend this absolute failure of an architecture. It's an excellent article, and together with its comment section will always have a special place in my heart.
Kristian Vättö - Wednesday, October 12, 2011 - link
I'm happy that I went with the i5-2500K. Bulldozer's performance, especially in gaming, seems to be pretty horrible.
ckryan - Wednesday, October 12, 2011 - link
I was just going to say the same thing. I was all about AMD last year, but early this year I picked up an i5-2500K and was blown away by the efficiency and performance even in a hobbled H67. Once I bought a proper P67, it was on. It's not that Bulldozer is terrible (because it isn't); Sandy Bridge is just a "phenom". If SB had just been a little faster than Lynnfield, it would still be fast. But it's a big leap to SB, and it's certainly the best value. AMD has Bulldozer, an inconsistent performer that is better in some areas and worse in others, but has a hard time competing with its own forebear. It's still an unusual product that some people will really benefit from and some won't. The demise of the Phenom II can't come soon enough for AMD, as some people will look at the benchmarks and conclude that a super-cheap X4 955BE is a much better value than BD. I hope it isn't seen that way, but it's not a difficult conclusion to reach. Perhaps BD is more forward-looking, and the other octo-core will be cheaper than the 8150, so it's a better value. I'd really like to see the performance of the 4- and 6-core parts before passing judgement.
It's still technically a win, but it's a Pyrrhic victory.
ogreslayer - Wednesday, October 12, 2011 - link
I tell friends that exact thing all the time. Phenoms are great CPUs, but switch to Nehalem or Sandy Bridge and the speed is noticeably different. At equal clocks, Core 2 Quads are as fast or faster.
Bulldozer ends up with a lot of issues fanboys refused to see, even though AnandTech and other sites did bring them up in previews. I guess it was just hope, and an understandable disbelief that AMD would be behind for a decade until the next architecture. We can start at clock speed, but being only dual-channel is not helping memory bandwidth either. I don't think there is enough L3, and they most definitely should have a shorter pipeline to crush through processes. They need a 1.4 to 1.6 in Cinebench, or what is the point of the modules?
The module philosophy is probably close to the future of x86, but I imagine Intel keeping HT enabled on the high-end SKUs. Also, I think both of them want to shift FP calculation over to GPUs.
slickr - Wednesday, October 12, 2011 - link
Yeah, I agree. To me, Bulldozer comes about a year late.
It's just not competitive enough, and the fact that you have to sacrifice single-threaded performance for multithreaded, when even the multithreaded isn't that good and loses to the 2600K, is just sad.
They needed to win big with Bulldozer and they failed hard!
retrospooty - Wednesday, October 12, 2011 - link
Ya, it seems to be a pattern lately with the last few AMD architectures.
1. Hype up the CPU as the next big thing
2. Release is delayed
3. Once released, benchmarks are severely underwhelming
JasperJanssen - Wednesday, October 12, 2011 - link
4. Immediately start hyping up the next release as the salvation of all.
GatorLord - Thursday, October 20, 2011 - link
It looks to me like BD is the CPU beta bug sponge for Trinity and beyond. Everybody these days releases a beta before the money launch.
Hence the B3 stepping... and probably a few more now that a capable fab is onboard with TSMC. BD is not a CPU like we're used to... it's an APU/HPC engine designed to drive code and a Cayman-class GPU at 28nm and lots of GHz... I get it now.
Also, the whole massive-cache, 2B-transistor thing (800M dedicated to I/O; SB uses 995M total) finally makes sense when you realize that this chip was designed to pump many smaller GPGPU caches full of raw data to process, and to combine all the outputs quickly.
Apparently GPUs compute very fast but have slow fetch latencies, and the best way to overcome that is by having their caches continuously and rapidly filled... like from the CPU with the big cache and I/O machine on the same chip... how smart... and convenient... and fast.
Can you say 'OpenCL'?
jleach1 - Friday, October 21, 2011 - link
I don't see how this can be considered an APU. This product isn't being marketed as an HPC processor, and I don't see the benefit of this architecture design in GPGPU environments at all.
It's sad... I've always given major kudos to AMD. Back in the days of the Athlon's prime, it was awesome to see David stomping Goliath.
But AMD has dropped the ball continuously since then. Thuban was nice, but it might as well be considered a fluke, seeing as AMD took a worthy architecture (Thuban) and ditched it for what's widely considered a joke.
And the phrase "AMD dropped the ball" is an understatement.
They've ultimately failed. They haven't competed with Intel in years. They... have... failed. After Thuban came out, I was starting to think that the fact that they competed for years on price and clock speed alone was a fluke, just a blip on the radar. Now I see it the opposite way... it seems that AMD merely puts out a good processor every once in a while... and only by accident.
medi01 - Wednesday, October 12, 2011 - link
Well, if Anand didn't badmouth AMD's GPUs on top of its CPUs, we would see fewer "fanboys" complaining about Anand's bias.
vol7ron - Wednesday, October 12, 2011 - link
By badmouth do you mean objectively tell the truth? Do you blame PCMark or FutureMark for any of that? Perhaps if all the tests just said that AMD was clearly better, it wouldn't be badmouthing anymore.
medi01 - Thursday, October 13, 2011 - link
The slightest "problem" imaginable with AMD GPUs would make it into headlines. An nVidia article would go with comparing a cherry-picked overclocked board against a stock one from AMD, with laughable "explanations" like "oh, nVidia marketing asked us to do it, we kind of refused, but then we thought that since we've already kind of refused, we might still do what they've asked".
"Objectively", are you kidding me?
JKflipflop98 - Thursday, October 13, 2011 - link
Anand runs the test, then writes down the number. Then he runs the test on the other PC and writes down the number.
If your number is lower, then it's physics "badmouthing" your precious, and not the site.
actionjksn - Wednesday, October 12, 2011 - link
@medi01 Considering the results, I think Anand was more than kind to AMD.
medi01 - Thursday, October 13, 2011 - link
I recall low-power AMD CPUs being tested on 1000-watt PSUs on this very site. How normal was that, cough? Or iPhones "forgotten in pocket" (author's comment) on comparison photos where they would look unfavourable.
The thing with tests is, you have games that favour one manufacturer, then other games that favour another. Choose the "right" set of games, and voilà...
The move with a 1000-watt PSU on a 35W TDP CPU is TOO DAMN LOW and should never happen.
On top of it, the vast majority of games are more GPU-sensitive than CPU-sensitive. Now, one could reduce the resolution to ridiculously low levels so that the CPU becomes the bottleneck, but then who on earth cares whether you get 150 or 194 frames per second at a resolution you'll never use?
Stas - Thursday, October 13, 2011 - link
Not sure what the deal is with PSUs or what article you're referring to. I'm assuming it made AMD power consumption look worse than it was because a 1kW PSU was running at 10% load, thus way out of its efficiency range. But whatever. My comment is mostly on CPU performance in games. Just because you don't run a game on a top-end CPU with $800 in multi-GPU tandem at the lowest settings doesn't mean that setup shouldn't be used to determine CPU performance. By making the CPU the bottleneck, you make it do as much as it can while the GPU spits out frames, whistling tunes and picking its fingernails. There is more load on the CPU than the GPU. Whichever CPU is faster will provide more FPS. Simple as that.
Sure, no one will see a 20%-30% performance difference using more appropriate resolution and quality settings. But we're enthusiasts; we want to see peak performance differences and extreme loads. Most synthetic tests are irrelevant in everyday use, but performance has been measured that way for decades.
jleach1 - Friday, October 14, 2011 - link
I haven't seen one single sentence that was questionable in an AMD graphics review. In fact, I'm glad to say that I'm a big fan of Intel CPU and AMD GPU combos, and have never seen even as much as a hint of bias.
As an over-exaggeration: in an age where we're all stuffing multiple cards in our systems, AMD cards are efficient, reliable, and powerful, and they run cool. Yes, the drivers have sucked in the past, but they don't really anymore.
(emphasis on the word seem)
nVidia cards have just seemed clunky and hot as hell since the 400 series. I don't feel like gaming next to a space heater. And I definitely don't want to pay 40 percent more for ten percent more performance just to have a space heater and bragging rights.
It's like AMD graphics are similar to Intel's CPU lineup: they're great performance-per-dollar parts, and they're efficient. But nVidia and Intel graphics are like AMD CPUs: they're either inefficient or good at only a few things.
The moral? What the *$&*, AMD... you might as well write off the whole desktop business if the competition is fifty percent faster and gaining ground... that 15 percent you're promising next year had better be closer to 50, or I'm going to forget about your processors altogether.
jleach1 - Friday, October 14, 2011 - link
Intel CPU and AMD combos*... sorry for the bad grammar. Writing on a tablet with Swype.
CeriseCogburn - Wednesday, March 21, 2012 - link
40% more cost and 10% more performance?
You said that's across the board.
I'm certainly glad you aren't the reviewer here on anything. I mean really that was over the top.
CeriseCogburn - Friday, June 8, 2012 - link
They went full-blown favor-the-bullsnoozer by using the GPU-limited AMD HD 5870 to make the stupid AMD CPU look good.
Thank your lucky stars they did that much for you.
MJEvans - Thursday, October 13, 2011 - link
I think your later point is exactly why the FPU support isn't as strong: (most) tasks that use the FPU appear to operate on large matrices of data. The sequential-processing side seems to have a good design idea behind it (even if the implementation is a little immature and a little early), but slower-latency L1/L2 cache access. I hope that's an area that will be addressed by the next iteration.
Iketh - Wednesday, October 12, 2011 - link
AMD Exec a year ago: "Are we about ready to release BD?"
AMD Engineer: "Soon. At 4GHz, we're actually slower per thread and using double the power of Phenom at 3.4GHz, but we'll get there..."
AMD Exec: /gquit
lyeoh - Wednesday, October 12, 2011 - link
Bulldozer reminds me of the P4/Prescott for some reason ;).
High clocks, high watts, but not enough performance.
Might be faster in parallelizable tasks but most people with such tasks would just buy more computers and build large clusters.
Iketh - Wednesday, October 12, 2011 - link
The processors are popping up on Newegg now... the 8120 for $220 and the 6100 for $190.
vol7ron - Wednesday, October 12, 2011 - link
Sigh... I made the mistake of buying a Prescott. Not to mention I bought an "E" batch, which ran even hotter and wasn't as overclockable.
actionjksn - Wednesday, October 12, 2011 - link
Yeah, I had one of those hot potatoes too. Back then we thought Intel was finished.
just4U - Thursday, October 13, 2011 - link
ckryan, you stated you were blown away by the 2500K, yes? It's odd, you know... I've owned a PII 920, PII 1055, PII 955 (and tested lots of lowbie $60-80 parts from AMD too)... also used an i7 920, i7 955, i5 2500K, and i7 2600K (my most recent one), and... I am not blown away by any of them.
The last time I was blown away by a CPU was the Q6600... (before that, the A64 3200+). Since then, other CPUs have been better, but not so much so that I'd call it a night-and-day difference.
CeriseCogburn - Wednesday, March 21, 2012 - link
OK, that was some enormously skilled twisting and spinning. BD is an epic failure, period. I can't envision anyone with any needs, need, or combo thereof choosing it.
It's so bad AMD lied about its transistor count.
Forget it, it's an epic fail and never anything more.
jiffylube1024 - Wednesday, October 12, 2011 - link
Ugh, BD is quite the disappointment. The power consumption is absolutely through the roof -- unacceptable for 32nm, really!
With that said, I am very intrigued by the FX-4100 4-core 3.6GHz part. This should be the replacement for the Athlon II X2-X4 series, and I'm very interested to see how it does vs. ~3GHz Athlon II X2s, X3s, and X4s.
yankeeDDL - Wednesday, October 12, 2011 - link
Wow ...I'm blown away.
I have been waiting for BD's reviews and benchmarks for months. I have waited for BD for my new rig.
I have used AMD for the past 8 years and I am ... was convinced that it always offered, by far, the best price/performance ratio for entry-level, mid range PCs.
I am still a big fan of AMD... but I stand corrected. BD is a POS. Longer pipelines? Didn't they learn anything from the Pentium 4 debacle?
A Phenom II X6 is almost always better than BD, even in power consumption. Come on: if BD had come out shortly after the Phenom I could see it as an incremental improvement, a new baseline to build upon. But it took AMD years to come out with BD ... and this is the result? Disappointing.
I mean, betting everything on higher clock frequencies? At 4GHz? It's no wonder that Intel's IPC improvements are crunching BD: IPC is all about doing more with the same power, clock speed is all about throwing more power to do the same faster ...
Boy. This ruined my day.
yankeeDDL - Wednesday, October 12, 2011 - link
By the way, no matter how AMD slices it, I see the FX-8* as a 4-core CPU. A glorified one, but still a 4-core.
If I were AMD, I would have considered it a fair goal to obliterate the i5-2500's performance with the new FX-8 family; instead it comes up short most of the time.
What were they thinking?
B3an - Wednesday, October 12, 2011 - link
Yep, this really is extremely disappointing. I'm actually going to call this AMD's Pentium 4. That's how bad this is.
2 billion transistors -- that's a massive increase over the Phenom II X6, and what do we get? Nothing. The Phenom II is at least as good with WAY fewer transistors and lower power consumption under load. I'm pretty shocked at how bad Bulldozer is. I wasn't expecting performance clock for clock to be as good as Nehalem, let alone Sandy Bridge, but this is just... appalling. When Ivy Bridge is out, the performance difference is going to be MASSIVE.
Intel is surely going to implement more restrictions and hold their clock speeds back even further. There's just no competition anymore. Sad day for consumers.
bennyg - Wednesday, October 12, 2011 - link
AMD's Prescott, to be exact... ironically, that's one thing they seemed to shoot for in deepening the pipeline and hoping the process would be better... hopefully this is just immature and soon there will be a GF110-style refresh that does it properly...
Otherwise, the whole next gen of AMD CPUs will continue to fight for scraps at the bottom of the heap... and their laptop CPUs will not even succeed there.
TekDemon - Wednesday, October 12, 2011 - link
I don't even know if it's just the process, since those power consumption figures seem to suggest that they're being limited by the sheer amount of power it's using and the heat being generated from that. Intel had planned to take the P4 to 10GHz, but the fact that it was a power hog prevented that from realistically happening, and it seems like you have the same issue here. The clockspeed potential is clearly there, since it can hit 7GHz under liquid nitrogen, but for a normal air heatsink setup this is a recipe for failure. It's just way too power-hungry and not fast enough to justify it. Why would anybody choose to use an extra 100 watts for largely the same or worse performance vs. an i5-2500K?
Thermalzeal - Wednesday, October 12, 2011 - link
I agree, 2 billion transistors are doing what, exactly?
The worst thing is that the water cooler isn't included with the FX-8150. At the performance levels they are providing, they should have just upped the price 30-50 bucks and provided the cooler gratis. Who's going to need an AMD-branded cooler if they're not going to buy Bulldozer?
The other point of this review is that there is no availability of any of the parts. So what a wonderful paper launch we have here. Seems like AMD isn't betting on anyone being interested enough to buy one of these things.
Blasphemy.
eanazag - Wednesday, October 12, 2011 - link
You can find them on Newegg today. The price is jacked up, though. Newegg must not read AT.
jleach1 - Friday, October 21, 2011 - link
Sigh... it's quite sad. There must be actual people buying these... either that or the supply is terrible, because there's no way in hell I'd pay those prices for an AMD processor.
defacer - Wednesday, October 12, 2011 - link
Like most people here, I'm disappointed with BD's performance -- even though I have never owned an AMD CPU after my 386DX/40, competition in the performance segment would be nice for a change.
I won't argue against "it's not 8-core", but calling it a 4-core is IMHO just as inappropriate (if not more so).
yankeeDDL - Wednesday, October 12, 2011 - link
OK, how about 4 modules, with 8 integer EUs, 4 fetch units, 4 decode units, 4 L2 caches...
Point being, they are 4 modules, not 8 cores, and in many respects they are more similar to a 4-core CPU than to an 8-core CPU, being neither one (somewhere in between).
The fact of the matter remains: the IPC is bad. In multi-threaded, integer-intensive tasks, BD should crush the Phenom II X6 (2 more cores, higher clock speed), but it seems you can hardly see the difference. (Ref: Excel 2007 SP1 Monte Carlo sims.)
AMD now is left with Llano as the only compelling reason to buy AMD over Intel (for netbooks and small notebooks, where Atom is the contender).
Against Core, either the FX-8150 goes down to $200 or less, or the i5-2500 is just a better buy for the money.
The advantage is I don't need a new MoBo (huge advantage for me, but not very compelling, in general).
yankeeDDL - Wednesday, October 12, 2011 - link
Forgot to mention, regarding the integer-intensive test: the Core i5 is slower by about 9% with a 9% lower clock, but has only 4 execution units (8 logical with Hyper-Threading, though Hyper-Threading should be nearly irrelevant in this test).
What a blow.
Ratman6161 - Wednesday, October 12, 2011 - link
We can argue about whether it's really a 4-core or an 8-core, and the argument is interesting from a technical standpoint. But the proof is in the real-world benchmarks. From a practical standpoint, if the benchmarks are not there (and they aren't), then the rest really doesn't matter.
I looked on Micro Center, where you can get a 2600K for $279 and a 2500K for $179. An i5-2400 is only $149. So AMD is going to be right back to having to cut prices and have its top-end CPU go up against $149-$179 Intel parts. Worse yet, it will, at least initially, be competing against its own previous-generation parts.
There is one point of interest, though: all the FXs are unlocked (according to the story). So it's pretty likely that an FX-8100 will overclock about as high as an 8150 once the process is mature. But there again, among overclockers, AMD could find its highest-end 8150 competing against its own lower-priced 8100.
Back in the day, I loved my Athlon 64s and 64 X2s, and even though I have switched to an Intel Q6600 and then a 2600K, I still really want AMD to succeed... but it's not looking good.
TekDemon - Wednesday, October 12, 2011 - link
Yeah, I paid $179 for my i5 2500K and it hums along at 4.8GHz (it can hit 5GHz+, but I wanted to keep the voltages reasonable). Clock for clock, Bulldozer is slower, since it's only competitive when the higher-clocked part is compared to a stock 2500K.
jleach1 - Friday, October 21, 2011 - link
Their cores offer, what, 75% of the speed of a normal core?
The fact is, this supposed "8-core" processor performs worse than AMD's own 6-core processor. There's no way we can get away with calling it an 8-core OR a 6-core.
For all intents and purposes, it's a quad core.
estarkey7 - Wednesday, October 12, 2011 - link
You took the words right out of my mouth! I am a big AMD fanboy, and I was waiting with bated breath to jump on the Bulldozer bandwagon for my next rig (and I probably still will). But this is ridiculous! I'm a computer engineer, and where the hell were the simulations, AMD? Seems like you could have halved the L3 and kept the extra FP resources and been better off than you are now.
Also, don't complain that Windows 7 doesn't recognize the architecture of Bulldozer; you knew that 18 months ago, so you should have been writing a patch so that it would have been a non-issue.
The absolutely, positively only reason I will buy an FX-8150 is that my current desktop is a dual-core Athlon running at 2.2GHz, so to me the performance increase would be massive. But on second thought, if I have stuck with such a slow system this long, I might wait another 3-5 months for Piledriver.
Taft12 - Wednesday, October 12, 2011 - link
<i>The power consumption is absolutely through the roof -- unacceptable for 32nm, really!</i>
Uhh, you did see the bar graph for idle power usage, right? And keep in mind this is an 8-core CPU compared to 4- and 6-core competitors.
Like you, I'm also very interested in the 4- and 6-core Bulldozers. Anand let us down by only reviewing the flagship Llano. Hopefully he doesn't do the same with Bulldozer.
Tom Womack - Wednesday, October 12, 2011 - link
Yes, the idle power is significantly worse than either of the Sandy Bridge platforms he's comparing it to.
JasperJanssen - Wednesday, October 12, 2011 - link
What Anand reviews is mostly down to what AMD will let him have -- even sites the size of AnandTech don't simply get to call and order parts from a catalogue as review samples.
Taft12 - Wednesday, October 12, 2011 - link
AMD doesn't have much control over "review samples" that can be purchased at retail, as you have been able to do with the A4-3300 et al. for weeks now.
enterco - Wednesday, October 12, 2011 - link
I read that at 1920x1200/1080 the gaming performance depends much more on the GPU. Anyway, I'm happy with my i5-2500K ;-). Bulldozer does not seem to have been worth the wait.
ninjaquick - Wednesday, October 12, 2011 - link
Blame shitty game developers.
AssBall - Wednesday, October 12, 2011 - link
Kinda what I was thinking. When they are all developing games for a 6-year-old 3-core PowerPC system with 512MB of RAM (Xbox) instead of a computer, it's no bloody wonder.
THizzle7XU - Wednesday, October 12, 2011 - link
Well, why would you target the variable PC segment when you can program for a well-established, large-user-base platform with a single configuration and make a ton more money with probably far less QA work, since there's only one set (two for multi-platform PS3 games) of hardware to test?
And it's not like 360/PS3 games suddenly look like crap 5-6 years into their cycles. Think about how good PS2 games looked 7 years into that system's life cycle (God of War 2). Devs are just now getting the most out of the hardware. It's a great time to be playing games on 360/PS3 (and PC!).
GatorLord - Wednesday, October 12, 2011 - link
Consider what AMD is and isn't, and where computing is headed, and this chip really begins to make sense. While these benches seem frustrating to those of us on a desktop today, I think a slightly deeper dive shows that there is a whole world of hope here... with these chips, not something later.
I dug into the deal with Cray and Oak Ridge, and Cray is selling ORNL massively powerful computers (think petaflops) using Bulldozer CPUs controlling Nvidia Tesla GPUs, which perform the bulk of the processing. The GPUs do vastly more and faster FPU calculations, and the CPU is vastly better at dishing out the grunt work and processing the results for use by humans, software, or other hardware. This is the future of High Performance Computing, today, but on a government scale. OK, so what? I'm a client user.
Here's what: AMD is actually best at making GPUs...no question. They have been in the GPGPU space as long as Nvidia...except the AMD engineers can collaborate on both CPU and GPU projects simultaneously without a bunch of awkward NDAs and antitrust BS getting in the way. That means that while they obviously can turn humble server chips into supercomputers by harnessing the many cores on a graphics card, how much more than we've seen is possible on our lowly desktops when this rebranded server chip enslaves the Ferraris on the PCI bus next door...the GPUs.
I get it... it makes perfect sense now. Don't waste real estate on FPU dies when the ones next door are hundreds or thousands of times better and faster. This is not the beginning of the end of AMD, but the end of the beginning (to shamelessly quote Churchill). Now all that cryptic talk about a supercomputer in your tablet makes sense... think Llano with a so-so CPU and a big GPU on the same die, with some code tweaks to schedule the GPU as a massive FPU, and the picture starts taking shape.
Now imagine a full blown server chip (BD) harnessing full blown GPUs...Radeon 6XXX or 7XXX and we are talking about performance improvements in the orders of magnitude, not percentage points. Is AMD crazy? I'm thinking crazy like a fox.
Oh, as a disclaimer: while I'm long AMD... I'm just an enthusiast like the rest of you, not a shill... I want both companies to make fast chips that I can use to do Monte Carlos and linear regressions... it just looks like AMD seems to have figured out how to play the hand they're holding for a change... here's to the future for us all.
Menoetios - Wednesday, October 12, 2011 - link
I think you bring up a very good point here. This chip looks like it's designed to be very closely paired with a highly programmable GPU, which is where the GPU roadmaps are leading over the next year and a half. While the apples-to-apples nature of this review draws a disappointing picture, I'm very curious how AMD's "Fusion" products will look next year, as the various compute elements of the CPU and GPU become more tightly integrated. Bulldozer appears to fit perfectly in an ecosystem that we don't quite have yet.
GatorLord - Wednesday, October 12, 2011 - link
Exactly. Ecosystem... I like it. This is what it must feel like to pick up a flashlight at the entrance to the tunnel when all you're used to is clubs and torches. Until you find the switch, it just seems worse than either... then voilà!
actionjksn - Wednesday, October 12, 2011 - link
Wow, I hope that made you feel better about the crappy chip, also known as "Man With A Shovel".
I was just hoping AMD would quit forcing Intel to keep on crippling their chips just to keep from putting AMD out of business. AMD had better fix this abortion quick; this is getting old.
GatorLord - Thursday, October 13, 2011 - link
Feeling fine. Not as good in the short run, but better about the long run. Unfortunately, due to constraints, it takes AMD too long to get stuff dialed in, and by the time they do, Intel has already made an end run and beaten them to the punch.
Intel can do that; they're 40x as big as AMD. Actually, and this may sound crazy until you digest it, the smartest thing Intel could do is spin off a couple of really good dev labs as competitors. Relying on AMD to drive your competition is risky, in that AMD may not be able to innovate fast enough to push Intel where it could be if it had more and better sharks in the water nipping at its tail.
You really need eight or more highly capable, highly aggressive competitors to create a fully functioning market free of monopolistic and oligopolistic sluggishness and BS hand-signalling between them. This space is too capital-intensive for that for the time being, with current chip-making technology what it is.
yankeeDDL - Wednesday, October 12, 2011 - link
Just to be the devil's advocate...
The launch event in London sported two PCs side by side running Cinebench.
One had the core i5-2500k, the other the FX8150.
Of course, these systems are prepared by AMD, so the results from Anand are clearly more reliable (at least all the conditions are documented).
Nevertheless, it is clear that in the demo from AMD, the FX runs faster. Not by a lot, but it is clearly faster than the i5.
Video: http://www.viddler.com/explore/engadget/videos/335...
Even so, assuming that this was a valid datapoint, things won't change too much: the i5-2500k is cheaper and (would be) slightly slower than the FX8150 in the most heavily threaded benchmark. But it would be slightly better than Anand's results show.
KamikaZeeFu - Wednesday, October 12, 2011 - link
"Nevertheless, it is clear that in the demo from AMD, the FX runs faster. Not by a lot, but it is clearly faster than the i5."Check the review, cinebench r11.5 multithreaded chart.
Anand's numbers mirror the ones by AMD. Multithreaded workloads are the only case where the 8150 will outperform an i5 2500k because it can process twice the amount of threads.
Really disappointed in AMD here, but I expected subpar performance because AMD was eerily quiet about the FX line as far as performance went.
Desktop BD is a full failure; they were aiming for high clock speeds and made sacrifices, but still failed their objective. By the time their process is mature and 4GHz 'dozers hit the channel, Ivy Bridge will be out.
As far as server performance goes, not even sure they will succeed there.
As seen in the review, clock-for-clock performance isn't up compared to the previous generation, and in some cases it's actually slower. Considering that servers run at lower clocks in the first place, I don't see BD being any threat to Intel's server lineup.
4 years to develop this chip, and their motto seemed to be "we'll do NetBurst, but in not-fail".
medi01 - Wednesday, October 12, 2011 - link
So the CPU is a bottleneck in your games, eh?
TekDemon - Wednesday, October 12, 2011 - link
It's not, but people don't buy CPUs for today's games; generally you want your system to be future-proof, so the more extra headroom there is in these CPU benchmarks, the better it holds up over the long term. Look back at CPU benchmarks from 3-4 years ago and you'll see that the CPUs that barely passed muster back then easily bottleneck you, whereas CPUs that had extra headroom are still usable for gaming. For example, the Core 2 Duo E8400 or E8500 is still a very capable gaming CPU, especially when given a mild overclock, and frankly, in games that only use a few threads (like StarCraft 2) it gives Bulldozer a run for its money.
I'm not a fanboy either way, since I own that E8400 as well as a Phenom II (unlocked to X4, OC'd to 3.9GHz) and an i5 2500K, but if I was building a new system I sure as heck would want extra headroom for future-proofing.
That said? Of course these chips will be more than enough power for general use. They're just not going to be good for high-end systems. But in a general-use situation the problem is that the power consumption is just crappy compared to the Intel solutions; even if you can argue that it's more than enough power for most people, why would you want to use more electricity?
Mishera - Monday, October 17, 2011 - link
Good points, TekDemon. But I'll add that, from what I understand, the GPU might be capable of processing huge amounts of graphics information but might have to wait for the CPU to process certain information before it's able to continue, hence some games only going so high in graphics tests no matter what kind of GPU is put in. Like he said, buying a good CPU will last longer than spending that money on a really good GPU. I personally try to build a balanced system, since by the time I upgrade it's a pretty big jump on all ends.
Snorkels - Wednesday, October 12, 2011 - link
This and other benchmark tests are BOGUS. You are comparing apples to oranges. LiarMarks. This test shows non-optimized code for AMD vs. optimized code for the Intel CPU.
It does not show the actual performance of the Bulldozer CPU.
Most software companies compile their software using Intel's compiler, which creates crappy and inefficient code paths for AMD processors.
Compile with the Open64 compiler and you get a totally different result.
actionjksn - Wednesday, October 12, 2011 - link
@Snorkels If most software companies are using an Intel compiler, why would you want an AMD processor that can't utilize it properly?
g101 - Wednesday, October 12, 2011 - link
Well, I'm happy that gamer children cannot understand the point of this architecture. You obviously have no concept of the architectural advantages, since it's not designed for game-playing children or completely unoptimized synthetic benchmarks. Bulldozer is optimized and future-proofed for professional software, rather than entertainment software for children.
AssBall - Wednesday, October 12, 2011 - link
"obviously have no concept of the architectural advantages"Enlighten us then, oh wise one.
guyjones - Sunday, October 16, 2011 - link
Who exactly is the child here? Your infantile comment conveniently ignores the fact that AMD has made gigantic marketing pushes that are clearly directed at the gamer community, not to mention gaming-related sponsorship activities and marketing tie-ins. So, on the contrary, the company has made very visible and consciously-directed efforts to appeal to gamers with its products. It is totally unreasonable to now posit that BD is not directed at least in part toward that market segment.
Will Robinson - Wednesday, October 12, 2011 - link
FailDozer... pretty limp performance numbers. Intel still rules.
etrigan420 - Wednesday, October 12, 2011 - link
What an unfortunate series of events... maybe my E8400 will hold out a little longer...
Malih - Wednesday, October 12, 2011 - link
Before reading (I've read all the other review sites, and am disappointed at AMD; just dying to see your view on the matter): thanks for the review, as always.
xorbit - Wednesday, October 12, 2011 - link
You are not measuring "branch prediction" performance. You are measuring the misspeculation penalty (due to a longer pipeline or other reasons). Nothing can "predict" random, data-dependent branches.
JumpingJack - Sunday, November 6, 2011 - link
This is a good point.
mianmian - Wednesday, October 12, 2011 - link
How disappointed I am. I can't believe what AMD will claim later on.
Marburg U - Wednesday, October 12, 2011 - link
Cannot see a reason to wait for Piledriver. AM3+ won't survive that chip, and +15%, even in single thread, won't be enough (for Sandy; I'm not even talking about Ivy). If BD had not been so bad, I would have hoped for a price drop on the Thuban, and would have gone for it. But now I fear price spikes for the old Phenom II X6 as it approaches its EOL.
Ethaniel - Wednesday, October 12, 2011 - link
... using a chainsaw. Newegg sells a 2500K for USD 220. I'm thinking something like 170-180 for the FX-8150. I was expecting a lot from the FX line. And I think that was my mistake, probably. Too bad.
Leyawiin - Wednesday, October 12, 2011 - link
I guess we can take comfort in that some things never change: namely, AMD processors are always behind the curve (and have been since before Intel's Core 2 Duo). Guess I'll hang onto my X4 955 @ 3.6GHz for a while longer. It'll be the last AMD processor I'll bother with (and I'm tired of being faithful and waiting on them).
richard77aus - Wednesday, October 12, 2011 - link
""At the same clock speed, Phenom II is almost 7% faster per core than Bulldozer according to our Cinebench results.""I am far from being an expert in CPUs but isn't the main advantage intel has had since core2- sandybridge the per core performance? not closk speed and not multi core.
I've seen some benchmarks showing real world usage of the SB i3 dual core where it out performs a faster clocked quad core phenom 2.
richard77aus - Wednesday, October 12, 2011 - link
Meaning AMD giving first priority to clockspeed and core count was the wrong thing to aim for, even if they had achieved a 4GHz+ stock 8-core processor; but to actually go backwards compared to such an old arch. is a disaster. (My first post here; is there a way to edit posts?)
Kristian Vättö - Wednesday, October 12, 2011 - link
The thing is that Phenom II, AMD's previous arch, is FASTER clock for clock than their new Bulldozer arch. Intel is far ahead of both CPUs, but it's a bit laughable that AMD's older CPUs actually outperform their new ones.
Saxie81 - Wednesday, October 12, 2011 - link
Hey Anand, did you happen to get the power consumption numbers when you hit 4.7GHz? This is... disappointing. I knew the single-thread benchmarks were going to be bad, but you need to be running something that needs the 8 cores; if not, it's of no use. Kinda like using a Magny-Cours to run Crysis.
Anand Lal Shimpi - Wednesday, October 12, 2011 - link
I'm going to be doing some more overclocking tomorrow, but I broke 300W at 4.7GHz :-/
Saxie81 - Wednesday, October 12, 2011 - link
Ouch... not looking good. :S Thanks for the reply; again, great review!!
velis - Wednesday, October 12, 2011 - link
Ignoring the power consumption, it seems to me that @4.6GHz it should start being quite competitive. So can we expect base clocks to rise once a significant volume of these chips starts getting out and GloFo refines the process?
I also must admit I didn't expect 2 bn transistors. All the time AMD was bragging about how much they saved and then we get this behemoth. No wonder they have process issues. Such big chips always do.
cfaalm - Wednesday, October 12, 2011 - link
Well, it is an 8-core, not a 4-core. 2x 995M (Sandy Bridge 4C) is almost 2B, though I am sure the multiplication isn't exactly right. A lot of it depends on the L3/L2 amounts. The savings seem to be minimal. I am still confused about why they so deliberately chose to go with relatively low single-thread performance. My main application is multithreaded, but since it's such a mixed bag overall I am pretty unsure if this will be my next CPU, unless I get to see convincing Cubase 6 benchies. For an FX moniker it needs to perform better than this anyway.
I'll throw in a lyric from The Fixx
"It doesn't mean much now, it's built for the future."
TekDemon - Wednesday, October 12, 2011 - link
Wow, no wonder they say you need water cooling or better to go 5GHz+.
enterco - Wednesday, October 12, 2011 - link
AMD should send a developer team to CryTek to help them release a patch able to use more cores :)
medi01 - Wednesday, October 12, 2011 - link
Uhm, what about other numbers?
IlllI - Wednesday, October 12, 2011 - link
This might be the final nail in the coffin. We might have to wait longer for it to be competitive? People have literally been waiting for -years- for AMD to catch up. Probably by the time Piledriver (or whatever it'll be called) comes out, Ivy Bridge will be out (and AMD will be even further behind Intel).
BTW, I think Tom's Hardware tested it with Windows 8 and it was still a turd.
I seriously hope you can get some answers/reasons why AMD released such a woeful product. Maybe this was why Dirk was fired? All I know is after 7+ years of AMD, my next processor will be Intel.
Ushio01 - Wednesday, October 12, 2011 - link
Desktop CPUs are halo parts and as such are irrelevant. It's the server and OEM laptop CPUs where AMD needs to perform, and AMD's server share just keeps dropping.
lyeoh - Wednesday, October 12, 2011 - link
Thing is, I wouldn't want to use them in my servers: http://us.generation-nt.com/answer/patch-x86-amd-c...
FWIW, when the Athlon64s first came out we bought a bunch of them. They were not bad, but there were clock issues: the TSCs weren't synchronized. So we had to set idle=poll (and thus used more watts).
You can blame the OS developers, but most people buy new hardware to run existing operating systems and programs on, not future unreleased ones.
It sure is looking bad for them. I won't be buying AMD CPUs but I hope the fanboys keep them alive ;).
OCedHrt - Wednesday, October 12, 2011 - link
"Other than the 8150, only the quad-core FX processors are able to exceed the 3.3GHz clock speed of the Phenom II X6 1100T."The 6 core FX is also clocked higher?
Ryan Smith - Wednesday, October 12, 2011 - link
Good point. Fixed.
Marburg U - Wednesday, October 12, 2011 - link
They have a bloated cache with something wrong inside:
http://www.xbitlabs.com/images/cpu/amd-fx-8150/t5....
npp - Wednesday, October 12, 2011 - link
Sun went an even more extreme route regarding FP performance on its Niagara CPUs: as far as I remember, the first-generation chip had a single FPU shared across eight cores. Performance was not even close to a dual-core Core 2 Duo at that time. So that was what I thought when I first read about the "module" approach in Bulldozer maybe a year ago: man, this must be geared primarily towards server workloads; it will suffer on the desktop. I guess FPU count = core count would have been more appropriate for the FX line.
hasu - Wednesday, October 12, 2011 - link
Would this be a good candidate for web server applications because of its excellent multi-threaded performance? How about hosting a bunch of virtual machines?
sep332 - Wednesday, October 12, 2011 - link
I've also been wondering if running a lot of VMs would work better on this CPU. But I don't really know how you'd benchmark that kind of thing. Time and total energy consumption to serve 20,000 web pages from 12 VMs?
magnetik - Wednesday, October 12, 2011 - link
I've been waiting for this moment for months and months. Reading the whole thing now...
themossie - Wednesday, October 12, 2011 - link
This processor is worse than the Phenom II X6 for most of my workloads. My next machine will be Sandy/Ivy Bridge. But... we haven't seen this clock ramp up yet. As Anand mentions on page 3, remember the initial Pentium 4s? The Willamette 1.4 and 1.5GHz processors were clearly worse than the competition, to say nothing of the PIII line. In time the P4 consistently beat the much-higher-IPC AMD processors on most workloads, especially after introducing Hyper-Threading. This really does feel like a new Pentium 4! Trying a design based on clock speed, and one-upping Intel's hyperthreading by calling 4 '1.5' cores 8 (we hyperthread your hyperthreading!) - it will be a wild ride.
At this point, I don't see anyone beating Intel at process shrink and they're a moving target. But competitive pricing, quick ramp up and a few large server wins can still save the day. Dream of crazy clockspeeds :-)
themossie - Wednesday, October 12, 2011 - link
Upon further reflection...
- Expect to see Bulldozer targeted towards servers and consumers who think "8 cores" sounds sexy, at least until clockspeed ramps up.
- Processor performance is not the limiting factor for most consumer applications. AMD will push APUs very heavily, something they can beat Intel at. Piledriver should drive a good price/performance bargain for OEMs, and for laptops may have idle power consumption in shouting distance of Sandy Bridge.
I'm more optimistic about AMD now. But my next machine will still be Sandy Bridge / Ivy Bridge.
wolfman3k5 - Wednesday, October 12, 2011 - link
I see people saying that they'll be waiting for Piledriver. Why not wait for AMD Drillbit, or AMD Dremel? How about AMD Screwdriver or AMD Nailpuller? Tomorrow my 2600K arrives. I'm done. I had a build with an ASUS 990FX ready for Bulldozer, but I will "bulldoze" the part back to NewEgg. I must admit, I was worried when I saw the large amounts of L2 cache before the launch. AMD engineers must have taken the summer off and decided to throw more cache at the problem. AMD needs a new engineering team. Why the hell can Intel get it right and they can't?
AMD, your CPU engineers are lazy and incompetent. I mean, it took you "only" four years to get your own version of the Pentium 4.
The bottom line is that it's time to fire your lazy, incompetent engineers and scout for some talent. That's what every other company that wants to succeed does, regardless of the industry. I mean, look at KIA and Hyundai for example: they went out and hired the best designers from Audi and the best engineers they could buy with money. Throw some more money at the problem, AMD, and solve your problems. And if those lazy fat fucks in Texas that you call engineers don't deliver, look somewhere else. Israel or Russia maybe? Who knows... Just my 2 cents.
IKeelU - Wednesday, October 12, 2011 - link
I know nothing of AMD employees' work ethic, but... their problems may have nothing to do with raw technical talent. You are right about one thing, though: throwing money at a problem can be helpful, and that's likely why Intel has succeeded for so long. Intel has a lot of cash and a lot of assets (such as equipment). They can afford the best design/debugging tools (whether they buy 'em or make 'em), which makes it much easier to develop a top product given the same amount of microchip engineering talent. And just because they're based in Texas doesn't mean their staff is all-American. Like most US tech firms, quite a bit of their talent was probably imported.
actionjksn - Thursday, October 13, 2011 - link
AMD Nailpuller? That was some funny shit right there, HA HA HA
Spam not Spam - Wednesday, October 12, 2011 - link
Just skimmed the review; not as awesome as I had hoped for, sadly. That being said, I'm thinking it might well be a nice improvement over the stock C2D Q6600 in my Dell. I could go Intel, obviously, but... I dunno. I've got an odd fascination with novel things, even if they are rough to begin with. Hell, I've even got a WP7 phone :p
wolfman3k5 - Wednesday, October 12, 2011 - link
Then make sure to get a quality power supply and motherboard to go with it. Also, your power bill will increase, but not directly from the Bulldozer CPU, nope, but from all the heat that it will make... you will need to run your air conditioner, which is a power hog.
/* Patiently waiting for AMD's next gen architecture codenamed "Bendover" */
ckryan - Wednesday, October 12, 2011 - link
Is 'ScrewdOver' next on the roadmap after 'Bendover'? I'll have to look in the official AMD leaked slide repository. I still think some intrepid AMD faithful will try BD out just because they're wired that way, and many of them are going to like it. I bet it compares better to Lynnfield than Sandy Bridge... except Ivy Bridge is closer in the future than SB's launch is in the past. This could be an interesting and relevant product after a few years, but the need is dire now. AMD is going to kill off the Phenom II as fast as possible.
themossie - Wednesday, October 12, 2011 - link
Bendover -> ScrewdOver -> Screwdriver (I'll bring the OJ) -> Piledriver. Courtesy of numerous internal leaks at AMD.
themossie - Wednesday, October 12, 2011 - link
My apologies, didn't realize Piledriver was real.
bill4 - Wednesday, October 12, 2011 - link
AKA, the reason all of us who are commenting are reading this review: gaming performance. And AMD chose not to even compete there. Bunch of monkeys over at AMD CPU engineering? It's now a non-starter in the enthusiast market.
I've often thought recently that AMD (or any manufacturer really, but AMD as a niche filler would be a more obvious choice given their market position) would do well to try to position itself as the gamer's choice, and even design its CPUs to excel in gaming at the expense of some other things at times. I really suspect this strategy would lead to a sales bonanza, because really, the one area where consumers crave high performance is pretty much only gaming. It's the one reason you actually want a really high-performance CPU (provided you don't do some sort of specialized audio/video work), instead of just "good enough", which is fine for general-purpose desktop use.
Instead they do the exact opposite with Bulldozer. Facepalm. Bulldozer is objectively awful in gaming. Practically nobody who posts at any kind of gaming or gaming-related forum will ever buy one of these. Unbelievable.
Perhaps making it even more stinging, there was some supposed pre-NDA-lift reviewer quote floating around about how "Bulldozer will be the choice for gamers" or something like that. And naturally everybody got excited, because that's all most people care about.
Combine that with the fact that it's much bigger and hotter than Intel's, and it's almost an unmitigated disaster.
This throws AMD's whole future into question, since apparently their future is based on this dog of a chip, and it even makes me wonder how long before AMD's engineers corrupt the ATI wing and bring the GPU side to disaster. The ONLY positive thing to come out of it is that at least AMD is promising yearly improvements, key word promising. Even then, the absolute best-case scenario is that they slowly fix this dog in stages, since it's clearly a broken architecture. And that's best case, and assumes they will even meet their schedule.
Anand lays so much of the blame on clockspeed, hinting AMD wanted much more. But even, say, a 4.3GHz Bulldozer would STILL be a dog in all-important gaming, so there's little hope.
shompa - Wednesday, October 12, 2011 - link
I have used many AMD systems. I deployed thousands of AMD CPUs inside Unix workstations at my old work. I cheer for AMD. But.
AMD is going to have a hard time ahead. Selling its fabs to GlobalFoundries was the biggest mistake of them all.
We are in the post-PC world. If tablets are computers, in 2012 20% of PCs will use ARM. That is a lot of lost CPU sales for AMD/Intel.
I predict that AMD will be gone within 3 years. Maybe someone buys them? After the settlement with Intel, AMD can now transfer its x86 license to the next buyer (pending Intel's approval).
Maybe Google could buy AMD and build complete computers?
wolfman3k5 - Wednesday, October 12, 2011 - link
I see two things that might happen to AMD: 1) They will transform into a GPU manufacturer completely (and of course they will make those silly APUs).
2) If that damn x86 license is transferable, they could merge with NVIDIA. Neither of these two companies looks too hot these days, so they might as well work together.
philosofool - Thursday, October 13, 2011 - link
We may be in the "post PC" era, but don't count x86 out. Recent studies indicate there's a corollary to Moore's law that applies to compute power per watt; the study goes back to 1961. This suggests that x86 is only a few years away from running on mobile devices, which is what MS and Intel are betting on. And frankly, it makes sense. Ultimately, I don't want two different things (a mobile device and a PC), I want a PC in my pocket and one on my desk.dingetje - Wednesday, October 12, 2011 - link
I agree with some that Bulldozer is more like Faildozer, but... let's keep supporting AMD, so the one getting piledrive'd in the naughty place will not be you when Intel has zero competition left because you did not want to spend a little more for a little less... and let's be honest, it IS just a little.
If enough people drop AMD, in the end WE will be the ones paying for the lack of support for AMD.
At least AMD is trying... the question is, what are YOU going to do to stop Intel becoming your bunghole-piledriving overlord?
wolfman3k5 - Wednesday, October 12, 2011 - link
Supporting incompetence is like socialism (or even communism). Eventually those that are supported will sit around like dogs all day and do nothing but lick their hairy balls...
dingetje - Wednesday, October 12, 2011 - link
Ah... someone has been brainwashed by watching too much Fox News. Communism baaaad, boogabooga!! ... duhhhhh lol roflmao
Sure, capitalism works... however, it only works when there actually IS competition.
I wish your (most likely already loose) rectum good luck.
wolfman3k5 - Wednesday, October 12, 2011 - link
Apparently money won't motivate the monkey engineers at AMD, so maybe making fun of them will. I mean, where is their pride, right? By the way, I've seen real socialism, so I have a clue what it is. And it is what I just described. I don't like Intel because they are not healthy for our economy, yet their only competition just pulled a gigantic fuck-up.
dingetje - Wednesday, October 12, 2011 - link
Oooooo oooga boooga, socialism is bad... it take away aaalll you money... it verrry baddd... oooooogabooogaboooo!! LOL
Have fun getting eaten alive by China after your capitalistic model turns cancerous and dies from the inside out.
Your country is bought and paid for, and will be eaten alive by the "communistic" Chinese, who are in fact just the same as what the USA has become: a corporate dictatorship (not communism, and certainly not socialism).
Sorry, I didn't mean to scare you more than you obviously already are.
I would send you some lube to ease the pain, but I'm all out ;)
UberApfel - Wednesday, October 12, 2011 - link
My god, you're all so retarded... Dingetje: China has serious issues when it comes to the welfare of its people. China only owns 10% of our debt, and that is thanks to China becoming capitalistic as a nation.
Wolfman: Bulldozer is a server processor. The server market is where the money is, especially with the cloud, and enthusiast-class desktops are becoming rare. Intel has 30X AMD's market capital... they can afford to target multiple markets. AMD can't.
Bulldozer is superior at integer processing in both performance-per-core and performance-per-watt. Of course, I do wonder why desktop applications even need floating point... (numbers < -2^63 or > 2^63)
hasu - Wednesday, October 12, 2011 - link
Likewise... killing or trying to control competition is also communism.
radium69 - Wednesday, October 12, 2011 - link
Jeebus, that power consumption is going through the roof! Also, there were some rumors that it would go up to 8GHz; I wonder if it would use a kW by then...
I want to see how they compare to each other when overclocked to 4.5GHz or thereabouts.
Also, Anand, can you do an efficiency test? Various overclocking speeds, benched while monitoring the power consumption. Might make an interesting article :)
ypsylon - Wednesday, October 12, 2011 - link
Not really - even including AMD fanboys. AMD can't understand that to move forward you must abolish old stuff for good. The brand new and spanking Bulldozer has its roots in the ancient K6. Do something new for crying out loud, or get lost and stop wasting time. Don't release CPUs just for the sake of offering something. That is not the point of the CPU market. Even Intel can shoot themselves in the foot with X79. Looks like it will be a similar failure to FailDozer. Nobody will invest in an entirely new platform for a 10, maybe 15% performance boost over X58, which is the new Socket 775. Long live S1366! Plenty of life and fuel left in Nehalem, plenty... If you wanted to buy Bulldozer, then go and buy an X58 platform. After nearly 4 years on the market it is [somewhat ;)] dirt cheap.
Anand, one thing: I find it puzzling that you reckon Bulldozer will do well in server environments. With that kind of performance/watt and inefficient power management? No chance in hell. i7s/Xeons will eat FailDozers for breakfast.
wolfman3k5 - Wednesday, October 12, 2011 - link
I'm not. I completely agree with everything that you've said. And, if I might add: Dear AMD, and dear AMD engineers (lazy fucks that you are), throwing more cache at an already inefficient architecture is not going to solve your problem. Add to that that you people (yes, you AMD people) are calling a 4-core CPU an 8-core because you've added another integer unit to each core. WTF?! That's almost like calling a quad-core Intel 2600K an 8-core CPU because it has Hyper-Threading.
I have been an avid AMD supporter since 1996. I have spent many thousands of dollars on your CPUs and other hardware that you people make. I'm done. Not another penny! Ever!
kiwidude - Wednesday, October 12, 2011 - link
I think this shows what a great job Intel has been doing, more than it confirms your insulting comment about AMD engineers.
JohanAnandtech - Wednesday, October 12, 2011 - link
"Brand new and spanking Bulldozer has it roots in ancient K6"There is some K7 heritage left, but I can not see in any way how this CPU relates to the K6! The K6 had a very short pipeline, a unpipelined FPU for example.
As when it comes to the server market: AMD seems to have overclocked and cherry picked the 3.6 GHz FX-8100. For the desktop market, clockspeed rules, so AMD didn't care too much about power consumption.
For the server market, they can go with lower clocked 95 W TDP parts. These should have a much better performance/watt ratio. Also, the server market runs at 30-80% CPU load, the desktopmarket runs a few cores at 100%. So the powermanagement features will show better results in the server market.
The gaming software needs fast caches (latency!) as IPC is decent. The server software is more forgiving when it comes to cache latency as IPC is more determined by the number of memory accesses and thread synchronization. That is the reason why that L3 is so handy. I think you should wait to condemn bulldozer until it is has been benchmarked on our server benchmarking suite.
I am worried about the legacy HPC performance of this chip though.It will take some recompiling before the chip starts to shine in this market.
FunBunny2 - Wednesday, October 12, 2011 - link
Had to get this far in the comment thread for sanity. Clearly, AMD (and one may disagree) has chosen to go for superior integer performance in a threaded architecture. D'oh! So what? It means they don't give a rat's rectum about gamers. They care a whole lot about application and database servers. They also accept the fact that single-threaded is dying, so just kill it.
Makaveli - Wednesday, October 12, 2011 - link
I stayed up and read this; it's 2 in the morning. Excellent review as always, Anand. But instead of back to the future, it's back to the P4???
Why AMD WHY for the love of everything holy!
Sind - Wednesday, October 12, 2011 - link
Disappointing... I hope they can get it together with the aggressive roadmap.
wolfman3k5 - Wednesday, October 12, 2011 - link
I know, right. I'm also patiently waiting for the AMD Bendover architecture. Maybe it will be competitive, who knows...
kiwidude - Wednesday, October 12, 2011 - link
Hi, the CPU Specification Comparison chart has incorrect info listed under the X6 1100T and X4 980 NB clocks. Great review as always; love your work.
wolfman3k5 - Wednesday, October 12, 2011 - link
NewEgg doesn't even have any Bulldozers in stock, at all. Not the AMD FX 8150 or AMD FX 8120. I guess that no one is in a hurry to grab one...
enterco - Wednesday, October 12, 2011 - link
Hell, Amazon UK doesn't have any Bulldozer either...
ckryan - Wednesday, October 12, 2011 - link
Maybe Newegg filed them under Server CPUs, where Bulldozer belongs.
AmdInside - Wednesday, October 12, 2011 - link
Their roadmap is aggressive, but when is the last time AMD came close to meeting their schedule? Not going to happen. But I do hope they do, for consumers' sake.
Eagle70ss - Wednesday, October 12, 2011 - link
AMD really bent over and grabbed their ankles... I'm just wondering why it took so long to release douche-dozer... I was really hoping they would have a good part this time... Will Intel stand alone as the sole quality CPU maker?? Only time will tell, but it looks to be so...
silverblue - Wednesday, October 12, 2011 - link
I must say, I did expect this. That price drop wasn't exactly a giveaway, was it? Single-threaded performance is generally poor and there really is something wrong with the caching. I simply refuse to believe a lack of BIOS optimisations is at fault for any of this... and blaming Windows 7 for not truly understanding Bulldozer's idiosyncrasies? Come off it; Windows 8 won't even be around when Piledriver appears, and we'll have to wait to see the second generation of this particular microarchitecture performing more like it "should". Bringing back the FX moniker certainly attracted attention; however, if by doing so they wanted to remind us of the fact that the FX-51 was a server CPU, they've succeeded, if only on that basis, as the FX was king of all and not just in select benchmarks as the P4 tended to be. I can't wait for Johan's server review; I just want to see if this thing really does do well in its natural habitat. It's got to have a success somewhere. Thankfully, I can see far more optimism in this area. Incidentally, I was expecting Bulldozer to be able to work on eight 128-bit FP instructions per clock as opposed to 6 with Thuban, so obviously I got my wires crossed on that one.
You can't argue that Bulldozer doesn't have a lot of promise, but at the same time, you can't argue that AMD hasn't been trying to perform damage limitation on an already faulty product.
arjuna1 - Wednesday, October 12, 2011 - link
Nobody, and I mean nobody at all, expected Bulldozer to reach SB-like performance; obviously nobody saw sub-Phenom II performance in certain applications coming either. But almost everything promised has been delivered, at lower prices than Intel, the way AMD has always done it. Quoting the article: "In many ways, where Bulldozer is a clear win is where AMD has always done well in: heavily threaded applications. If you're predominantly running well threaded workloads, Bulldozer will typically give you performance somewhere around or above Intel's 2500K."
PS
wolfman3k5, stop your Intel shilling; it almost looks as if Intel were paying you by the hour.
wolfman3k5 - Wednesday, October 12, 2011 - link
I get $22.50 per hour from Intel plus tips. I also get a $50.00 bonus if I surpass 1000 comments / posts per day. Between 3:00AM and 7:00AM I get $25.85 per hour. I make good money writing nice things about Intel. What do you do?
g101 - Wednesday, October 12, 2011 - link
What's surprising is that you apparently think that's "good money". Guess what, you little dumbshit kid: profit-savvy professionals will still be running AMD. I couldn't care less about your shitty lightly threaded games and optimized synthetic benchmarks.
Stupid children using their computers for play.
silverblue - Friday, October 14, 2011 - link
You need to bear in mind that a) AMD reintroduced the FX brand just for Zambezi, and b) JF-AMD actually started a thread entitled "The Bulldozer Blog Is Live!" on www.overclock.net. Regardless of whether John Freuhe is a server-focused guy or not, the point is that he and AMD both targeted the client side in terms of i) overclockers and ii) gamers. I might be wrong, but that's how I see it. Yes, he didn't come out and say directly that Zambezi would be a great gaming solution, but he DID say that IPC would be an improvement over their past products. Now that the reviews are out, he's nowhere to be seen, barring the odd login to do who-knows-what. Does overclock.net have any leaning towards the server market in any way? If Zambezi's poor performance is partly down to faulty ASUS boards/anything less than 1866MHz RAM/an L1 cache bug/some weird hardware combinations/WHATEVER, I'm sure we'll find out in time. But regardless, it's going to be harming non-gaming workloads as well, so it's important to people like you too.
silverblue - Friday, October 14, 2011 - link
Just thought I'd say that I've been a bit harsh on JF there. Out of all the AMD people who could've come along for a chat, he was definitely the bravest. It was on his own free time, and he's probably getting copious amounts of hate messages just for being an AMD rep.
Proxicon - Wednesday, October 12, 2011 - link
I stayed up all night to read this review... I guess the prices on the 2600K won't be going down anytime soon. I had already built my complete system in my head. Then the reviews came...
I kind of figured that if AMD was firing people and resignations were being handed in before a major launch, it wasn't going to be good. Also, no early release of benchmarks. That in itself was suspect. If they really had such a great processor, then why all the secrecy? I was hoping it was an Apple-style play. Boy, was I wrong.
You guys buy the "faildozer" and help keep the prices of the 2600K low. I'll be looking for a 2600K....
3DVagabond - Wednesday, October 12, 2011 - link
I'm not an expert, but Bulldozer seems to be a server chip pressed into desktop service. Designed for highly threaded workloads (and to scale to even more than 8 cores), many consumer tasks just aren't its forte. While it isn't competitive in single-threaded performance, if you run highly threaded workloads often enough and aren't afraid to O/C to boost the single-core performance, Bulldozer can be the better chip. That is, if the price is right. The 8120 might be an awesome value in this scenario. We'll have to wait for reviews to be sure.

One question, please. When you O/C'd the 8150, did you only use stock cooling? From the review it sounded like you did, but instead of saying so clearly, you said it wouldn't do 5GHz on "air" (I believe that was the statement? Feel free to flame me if I'm wrong. :D). So, to be clear, would it not do 5GHz on air with a top-notch cooler, or did you only try the stock cooler?
Thanks.
arjuna1 - Wednesday, October 12, 2011 - link
I was wondering the same; the OC part of the review seemed rushed, almost lazy. I hope Anand can correct this and clear up the doubts: can one of these CPUs run @ 5GHz or not?
silverblue - Wednesday, October 12, 2011 - link
Anand did say that he doesn't yet possess one of the AMD-sanctioned water coolers, but will test with it once he does.
arjuna1 - Wednesday, October 12, 2011 - link
All the more reason to have tested it with an aftermarket cooler. If it only hits 5GHz with AMD's sanctioned cooler (which, given the insignificant difference between the Corsair and Antec offerings, wouldn't surprise me if it was just a rebrand), that could be a bit of a problem for those of us already using a similar water cooler.
arjuna1 - Wednesday, October 12, 2011 - link
They have it on LegitReviews running @ 4.9 with water cooling.
silverblue - Thursday, October 13, 2011 - link
HardwareHeaven have theirs at 5.2.
Jkm3141 - Wednesday, October 12, 2011 - link
I would LOVE to see how this handles a virtual server workload.
JohanAnandtech - Wednesday, October 12, 2011 - link
Patience :-). We will do our best with a new virtualization benchmark alongside the old one when the Interlagos server arrives.
- Johan
ghosttr - Wednesday, October 12, 2011 - link
Not only does this CPU fail, it fails so hard it even struggles to compete with its aging predecessors. A new architecture AND a die shrink, and it can barely hold its own. What's really sad is that AMD could have updated K10 and probably achieved the same (or likely better) results.
bersl2 - Wednesday, October 12, 2011 - link
I'll probably end up buying one. I'm still on my socket-939 Opteron 165, and I can wait a little bit more. Since many of you seem to be wont to skip this one, I'll probably get it at a better price.

Also, since I don't give a flying fsck about Windows, I'll probably get a Bulldozer-aware CPU scheduler before you clowns do. :P
Hrel - Wednesday, October 12, 2011 - link
Seriously disappointed now. I'm glad they put more than 2 freaking SATA 6Gbps ports on the mobo, but that's a $200+ mobo so it doesn't really matter.

AMD's CPU performance is retarted. Honestly, after all the hype and all the delays, this is a disaster. Good thing their GPU division is executing well or I'd be seriously worried about this company being around in 4 years.
Intel needs to stop jewing out on their mobo configurations. I need AT LEAST 4 SATA 6Gbps ports and I want like 12 USB 3.0 ports. But even with my gripes about them cheaping out on mobos and switching sockets every year or two... at least their CPUs have gotten faster in the last 6 years, beyond just the expected incremental gains like AMD has made... or this time around hasn't.
Fujikoma - Wednesday, October 12, 2011 - link
Really... in this day and age... 'jewing'???
bji - Wednesday, October 12, 2011 - link
Yeah, I stopped reading his comment exactly at that word. Disgusting.
eh_ch - Wednesday, October 12, 2011 - link
You could try to argue that describing the performance as retarded *might* be syntactically and grammatically correct, but clearly it's meant in the pejorative sense. You could have pointed out the ironic misspelling. But you didn't react at all. Idiots.
bji - Wednesday, October 12, 2011 - link
Sorry, but 'retarded' has already entered common usage and isn't offensive to anyone not looking explicitly to be offended. 'Jewing', however, has not. Not even close.
EnerJi - Wednesday, October 12, 2011 - link
I beg to differ. It very much is offensive to anyone who has a friend, family member, or other person they care about with a mental handicap.
g101 - Wednesday, October 12, 2011 - link
What do you expect? The majority of these comments come from idiot children that only care about games and completely misunderstand the point of this architecture.
Hrel - Wednesday, October 12, 2011 - link
y'all need to lighten up and take life less seriously. If you're wasting your time being politically correct then you're wasting your time... nuff said.
Hrel - Wednesday, October 12, 2011 - link
I disagree; jewing is totally not offensive. My friends call me Jewish all the time and I'm not really any religion, AND I'm mostly German. It's just a joke, dude; lighten up.
Hrel - Wednesday, October 12, 2011 - link
I like to spell retarted that way better; that's how I say it. I also spell theater theatre, and ever since that movie Inglourious Basterds I spell bastard basterd. I just like it better.
Hrel - Wednesday, October 12, 2011 - link
What's disgusting is that you're so racist it offends you. If you truly aren't racist then it doesn't matter. It's just another way of saying someone is cheap. And as long as you're not a pent-up, politically correct old fogey, it's humorous.
Yes, I use it as a term that means being cheap. My friends often call me Jewish 'cause I hunt for bargains pretty relentlessly.
silverblue - Friday, October 14, 2011 - link
I get called Scottish for the same thing. ;)
poohbear - Wednesday, October 12, 2011 - link
So disappointing to read this. What on earth were they doing all this time?? AMD's NEW CPU can't even outperform its OLD CPU? Well, at least I can stick with my Phenom II X6 till Ivy Bridge comes out, and thank goodness I didn't buy a pricey AM3+ board before reading reviews. :p So sad to see AMD come to this...
OutsideLoopComputers - Wednesday, October 12, 2011 - link
I think when Anand publishes benchmarks with a couple of Bulldozers working together in a dual- or quad-socket board (Opteron), THEN we will see why AMD designed it the way they did. If the FX achieves parity, and sometimes superiority, in heavily multithreaded apps vs. Sandy Bridge in a single socket, then imagine how two or four of these working together will do in server applications vs. Sandy Bridge Xeon. I'll bet we see superiority in most server disciplines.

I don't think this silicon was designed to go after Intel desktop processors, but to compete directly with dual- and quad-socket Xeon. It's intended to be an Opteron right now and, as an afterthought, to be sold as an FX desktop single-socket part to bridge the gap between the A-series and Opteron.
JohanAnandtech - Wednesday, October 12, 2011 - link
Indeed. The market for high-end desktop parts is very small, with low margins, and shrinking! The mobile market is growing, so AMD's A6 and A8 CPUs make a lot more sense.

The server market keeps growing, and the profit margins are excellent because a large percentage of that market wants high-end parts (compare that to the desktop market, where almost everyone wants midrange and budget parts). The zip and encryption benchmarks show that Bulldozer is definitely not a complete failure. We'll see :-)
g101 - Wednesday, October 12, 2011 - link
Good to see an intelligent reviewer that knows how to do more than run synthetic benchmarks and games. It's funny seeing all the uneducated gamer "complete failure" comments.
bassbeast - Thursday, February 9, 2012 - link
I'm sorry, but you are wrong, sir, and here is why: they are marketing this chip at the CONSUMER and NOT the server, which makes it a total Faildozer. If they had kept the P2 for the consumer and kept BD for the Opteron, then you, sir, would have been 100% correct. But by killing their P2 they have just admitted they are out of the desktop CPU business, and for a company that small that is a seriously DUMB move. The Athlon and P2 have been the "go to" chips for many of us system builders because they gave "good enough" performance in the apps people use, but Faildozer is a hot pig of a chip that is worse for consumer loads in every. single. way. compared to the P2.
I'm just glad I bought an X6 when I did, but when I can no longer get the P2 and Athlon II for new builds, I'll be switching to Intel. The BD is simply worthless for the consumer market and NEVER should have been marketed to it in the first place! So please get off your high horse and admit the truth: the BD chip should never have been sold for anything but servers.
haplo602 - Wednesday, October 12, 2011 - link
This is a server CPU abused for the desktop.

Have a look at FPU performance. Almost clock for clock (3.3GHz vs. 3.6GHz), it beats the 6 FPU units in the Phenom X6. That's quite nice.
Once they do some optimisations on a mature process, this will achieve SB performance levels. Until then, however, I am going for 2389 Optys...
GourdFreeMan - Wednesday, October 12, 2011 - link
You introduce the fact that AMD lengthened the pipeline transitioning to Bulldozer without explicitly mentioning the pipeline length. How many stages exactly is Bulldozer's pipeline?
duploxxx - Wednesday, October 12, 2011 - link
Well, there clearly seems to be something wrong with the usage of the modules, in combination with the way-too-high latency on all the caches and memory. Single-threaded performance is hit by that, and so is gaming performance. So I hope AnandTech can have a close look at the following thread and keep digging:
http://www.xtremesystems.org/forums/showthread.php...
Secondly, during OC, just like with the previous gen, do something more with NB overclocking instead of just upping the GHz; there is more to an architecture than just the clock speed...
HW_mee - Wednesday, October 12, 2011 - link
I did not expect Bulldozer to rock the CPU world, but...

A Bulldozer module has a 256-bit shared FPU which is capable of executing 2x128-bit FP instructions at the same time, vs. one 128-bit FPU per core in Phenom II.
A Bulldozer 8150 should therefore be able to process 4x256-bit FP instructions or 8x128-bit FP instructions at a time, while a Phenom II 1100T should be able to process 6x128-bit FP instructions at a time.
The short calculation above shows Bulldozer should have an advantage over Phenom II in FPU-heavy computations.
The tests don't lie, and the two processors perform the same, but there should have been a difference, in theory.
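As a sanity check on the theory, the lane counting above can be written out explicitly (a rough sketch only; it counts peak 128-bit FP issue slots on paper and ignores clock speed, caches and instruction mix):

```python
# Peak 128-bit FP operations per cycle, on paper (illustrative only).

# FX-8150: 4 modules, each with one shared 256-bit FPU that can
# execute two 128-bit FP instructions at a time.
bulldozer_lanes = 4 * 2      # 8 x 128-bit ops per cycle

# Phenom II X6 1100T: 6 cores, one 128-bit FPU each.
phenom_lanes = 6 * 1         # 6 x 128-bit ops per cycle

print(bulldozer_lanes, phenom_lanes)  # 8 6
print(f"paper advantage: {bulldozer_lanes / phenom_lanes:.2f}x")  # 1.33x
```

On paper that is a ~33% per-cycle advantage, which is exactly why the flat benchmark results are surprising.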
HW_mee - Wednesday, October 12, 2011 - link
Need that EDIT button.... Bulldozer has a 256 bit shared FPU per module, which ...
Mr Alpha - Wednesday, October 12, 2011 - link
I believe Carmack mentioned during QuakeCon that the textures are compressed using Microsoft's HD Photo (aka Windows Media Photo).
Ryan Smith - Wednesday, October 12, 2011 - link
Aha, I thought it was something like that, but I couldn't come up with the right keyword. Fixed. Thanks!
IceDread - Wednesday, October 12, 2011 - link
This is an utter failure from AMD. For a personal workstation or home computer there is simply no reason to choose AMD over Intel. There are even cases where the older AMD CPU is better, which to me looks insane.

So we get nearly no new price pressure on the market from this either. It's just like a silent release: the market won't notice, and customers won't notice that there is a new AMD CPU on the market, because the CPU has nothing of interest to offer. This is really disappointing.
cjs150 - Wednesday, October 12, 2011 - link
You have summed it up well. Worse performance, crap power consumption. The only way to save this is a big price cut, say to $150-160.
The AMD-approved water cooling system looks to be a gimmick, and I say this as someone who prefers water over air cooling. I do not see the point of CPU-only watercooling: if that is all you want, then air cooling is cheaper, almost as good, and a heck of a lot easier to install. IMO CPU+GPU is the minimum if you want to watercool, unless you are into overclocking, where a single rad is too small.
I guess AMD are rapidly becoming a niche player, because as bad as BD is, Intel's Atom is worse compared to the AMD equivalent.
IceDread - Wednesday, October 12, 2011 - link
A very high price cut could save the product like you say, but that might mean AMD loses cash; though it's not like they won't lose cash already, being so far behind. I really do not see a place for this new CPU. This is bad, though, because we need competition on the market; otherwise Intel will only have to take customers' willingness to pay a certain amount of cash for a CPU into account, and there is no competition.
I do like water cooling, though, but that is because I like to overclock some. Water cooling systems can also be quieter, but not necessarily. I don't believe most people would gain from a water cooling system; it would only increase the price of the product.
haplo602 - Wednesday, October 12, 2011 - link
Yes there is: Socket F and 2389 Optys. They are dirt cheap right now :-) If you can get a mobo that supports 2439s then you are golden.
g101 - Wednesday, October 12, 2011 - link
I swear, you fucking kids are ridiculously stupid. Those of us that actually use CPUs to their full potential understand that this is far from a 'failure'. You gamer children haven't got a clue what 'future proofed' means.
IceDread - Thursday, October 13, 2011 - link
Hey retard, there is no smooth way to utilize this CPU. Try and realize that. There are few cases where strong CPUs are needed: servers, graphics, data processing and gaming. Most readers on this forum are probably gamers and thus write from that perspective. Is that so hard to understand?
From my point of view as a solution developer of funds and insurance systems, this CPU is not of interest, because the alternatives for the servers are better, and for client computers a strong CPU is usually not needed at all, so this CPU is not of interest there either.
silverblue - Friday, October 14, 2011 - link
We don't know the server performance yet. There's also some frantic investigation going on into why the client benchmarks were so poor, so watch this space.
IceDread - Friday, October 14, 2011 - link
I always observe. I hope AMD makes a comeback, because the market needs competition to work in favor of the customer. Analysts believe AMD is becoming irrelevant in the processor market, which is really bad. They now value AMD's shares at $4 each, rather than the $5 they believed before.
I do not know if new drivers would help AMD out or not, but 10% more or less does not cut it. I really hope AMD makes a comeback with a new CPU in a couple of years, but only time will tell. How they could release this one is not understandable; they had to have known the numbers... something is not always better than nothing...
chrone - Wednesday, October 12, 2011 - link
Even the Phenom II X6 is way better. :(
piroroadkill - Wednesday, October 12, 2011 - link
Because there is no longer anything to wait for. Bulldozer is an absolute failure. Oh dear. That's not helpful at all, because a competitive AMD CPU is just what the market needs right now.
MossySF - Wednesday, October 12, 2011 - link
The worst part about this review is the MB diagram. Earlier this year, the diagrams for the SB950 showed a 4GB/s A-Link Express III connection. The diagram in this review shows 2GB/s, the same as the previous SB850 and competing Intel chipsets.
Who cares about CPU speed... they're all close enough. We need I/O speed way more. I have a new server with 5 x 6Gbps SandForce SSDs in a RAID0 array and we hit the 2GB/s limit with just 2 drives. (2GB/s bidirectional is 1GB/s each way.)
And no, I have zero confidence that any 4-port/8-port RAID controller has enough power either. Maybe the most expensive 24-port ones can do it, but I am not going to continually buy and return $2000 controllers until I find one that is beefy enough, especially since the majority of the RAID functionality is completely wasted on SSDs.
nexox - Wednesday, October 12, 2011 - link
OT, but you may want to try a SAS2 HBA based on the LSI 2008 chip. They're generally around $150 and I promise they can do:
A) Far higher performance than any crappy motherboard controller.
B) Way more than 2GB/s full duplex.
If you're worried about storage performance while using an on-board disk controller, you're going about it all wrong, especially if you think you're going to gain much using their crappy software RAID.
gvaley - Wednesday, October 12, 2011 - link
This monster, in this particular test, adds over 144W going from idle to full load! (For comparison, the i7-2600K adds a mere 78W and performs a notch better.) Assuming it already wastes ~10W at idle, and even factoring in the increase in power usage coming from the chipset/memory, I still very much doubt that this... hm... thing... can fit into the stated 125W TDP.

Good thing Anand didn't do the performance-per-watt maths... it would've painted a devastating picture.
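For what it's worth, the back-of-envelope maths can be sketched; note the 80% PSU efficiency and 10W idle figures below are assumptions, not measured values:

```python
# Convert the measured wall-power delta to an estimated CPU draw
# (illustrative; assumes ~80% PSU efficiency and that the whole
# idle-to-load delta is attributable to the CPU).
wall_delta_w = 144       # idle-to-load increase at the wall, from the review
psu_efficiency = 0.80    # assumed
cpu_idle_w = 10          # assumed

cpu_load_w = wall_delta_w * psu_efficiency + cpu_idle_w
print(f"estimated CPU draw at load: {cpu_load_w:.1f} W")  # ~125 W
```

Under those assumptions the estimate lands right at the stated 125W TDP, so whether it really fits depends mostly on how efficient the test PSU actually is.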
f4phantom2500 - Wednesday, October 12, 2011 - link
I think I'll hold on to my unlocked/overclocked Phenom II X3 for now.
iwod - Wednesday, October 12, 2011 - link
Intel makes most of its money with server CPUs, and if BD performs well there, then I suppose AMD still has some breathing space. Otherwise, with Ivy Bridge, AMD doesn't have a single chance of surviving in the near future. Graphics are much better with Ivy, and its video decoding engine seems to be much better than even Nvidia's or AMD's.
milli - Wednesday, October 12, 2011 - link
"Sandy Bridge, which on Intel's 32nm HKMG process is only 1.16B transistors with a die size of 216mm2"But in the table below it you say it's 995M transistors.
In the AMD table, you mention '3MB' as NB Clock for the AMD Phenom II X6 1100T.
Ryan Smith - Wednesday, October 12, 2011 - link
Technically, both are correct. One of us wrote the text and the other wrote the diagram, and each of us picked a value; they just didn't match. Whoops. Fixed.
http://www.anandtech.com/show/4818/counting-transi...
octoploid - Wednesday, October 12, 2011 - link
Anand, are you sure that you have used the right -j number
in this benchmark?
The Bulldozer performance almost looks too bad to be true.
http://images.anandtech.com/graphs/graph4955/41699...
Look at these pictures for comparison:
http://www.hardware.fr/medias/photos_news/00/33/IM...
http://www.hardware.fr/medias/photos_news/00/33/IM...
FlanK3r - Wednesday, October 12, 2011 - link
Good review, La Shimpi. For OC... I think it is no problem to hit 4.8GHz stable, but you need a better air cooler (some Noctua or so). I hit 4840MHz stable with a D14. Boots up to 5050MHz; 5250MHz validation.
BR, FlanK3r
Cygus - Wednesday, October 12, 2011 - link
Apparently AMD is busy with some sort of Bulldozer optimization patch for Windows 7. Anand, will you guys be updating your benchmarks once this comes out?

Sigh, I was really hoping for better competition to drive the prices down.
BSMonitor - Wednesday, October 12, 2011 - link
"Sigh, I was really hoping for better competition to drive the prices down."

Down from what? These are the same price points that have existed for a decade. You would simply get more performance at the same price point!
Kjella - Wednesday, October 12, 2011 - link
Going from 85W to 229W in power consumption is 144W. If we assume an 80% efficient PSU, that's 115W + 10W idle, so it's completely maxing out the TDP. Look at it: it's almost 100W over the 2500K, which is what it's most competitive with. I can't see that being popular at home or in the server space; it's expensive and a huge problem to cool sufficiently. And that die size... it must cost tons to produce, so the margins must be slim to none. Crap for customers, crap for AMD. I didn't dare get my hopes up very high, but this I would call a total disaster. I honestly did not think it could get this bad.
ninjaquick - Wednesday, October 12, 2011 - link
BD seems to be very forward-looking, and I think it will be worth it. Look at 7-Zip vs. WinZip performance (can't remember who did the comparison): BD was worst in WinZip and fastest in 7-Zip, slower than Propus in one and faster than Sandy Bridge in the other. Intelligently threaded games really rock with BD, but older or less technical games tank with it.

I agree with others that this is more of a 4-core, 8-thread CPU, but honestly, counting cores is dumb. Threads are what matter, and this can run 8 side by side without adding latency, unlike Intel's HT.
And really, clock speed was 'abandoned' because it wasn't really feasible to pursue it any higher; but properly executed, a high-clock design can allow for deeper pipes without sacrificing latency too much, achieving more per clock at a higher clock rate. And from what I've seen on the power consumption side of things, 4.6 doesn't draw as much as 3.7 does on my PhII.
IceDread - Wednesday, October 12, 2011 - link
Actually, performance per watt is what matters, and this CPU fails horribly in so many areas. This product would have been better off unreleased. You'd have to be somewhat insane to purchase it.
FunBunny2 - Wednesday, October 12, 2011 - link
Wrong. A thread is a fake core; a core is a real core. A thread shares the core with the rest of its threads. More cores, more fun.
Not if "A single Bulldozer module can switch between threads as often as every clock."vectorm12 - Wednesday, October 12, 2011 - link
As both AMD and Intel now use dedicated hardware for AES, I feel simply testing AES performance isn't enough. A benchmark of AES+Twofish+Serpent, or at least AES+Serpent, would be more telling at this point. Don't get me wrong, I love that you guys even run a benchmark related to encryption, but it needs to be updated.

About BD, I'm also extremely bummed out that it didn't turn out better than this. Of course there might be room for improvement with patches/a CPU driver for Windows 7 etc., but considering the TDP, transistor count and everything else, this is a huge loss for AMD.
I'm still interested in seeing how the Opteron versions will perform in specific tasks, as the architecture itself seems really interesting. Someone obviously spent a lot of time thinking this design through, and I'd like to believe there's at least one particular workload where BD can actually flex its muscles for real.
fri2219 - Wednesday, October 12, 2011 - link
Since Bulldozer wasn't created with 3D shooters in mind, it would have been nice to see some financial/engineering/scientific benchmarks instead. Anandtech used to differentiate itself from the kiddie sites by providing that sort of analysis. I guess things change; like my RSS subscription to Anandtech articles will.

That said, the power consumption numbers pretty much say everything I need to know about this CPU series: the constraint on almost all HPC is power, not SPECint or peak flops.
chillmelt - Wednesday, October 12, 2011 - link
Unfortunately, a huge majority of the enthusiast market are gamers. If you truly want productivity benchmarks, then wait for the server chips. FX CPUs aren't marketed as server parts, but they do perform like them.

With that said, the FX lineup is a decent multi-threading powerhouse, and not a flop in that respect.
Read tomshardware's review for more benchmarks.
Malih - Wednesday, October 12, 2011 - link
Well, I guess there'll be follow-up posts:

"My sample actually arrived less than 12 hours ago, so expect a follow up with performance analysis later this week."
lagrande - Wednesday, October 12, 2011 - link
I'm not an AMD fanboy, but the reason AMD gave is pretty reasonable. Thread locality is an important factor in the Bulldozer architecture, primarily because the memory latency at the cache level is pretty high. If the OS can't schedule threads properly to the correct cores, then there will be a lot of inter-core data movement, and problems like false sharing can become more apparent.
GatorLord - Wednesday, October 12, 2011 - link
While on one hand, as a PC user and builder who really wanted to build a BD-based mindblower, I'm a little disappointed by these results... OK, more than a little... on the other hand, as an MBA and investor in AMD, I see the big picture and have to reluctantly agree, and hopefully profit.

If you have constrained and finite capabilities in both design and manufacture (GloFo needs its butt kicked), you maximize along a marginal ROI track. Right now that means server chips, to support the growing and lucrative cloud, data warehouse, HPC and corporate server markets, and the growing Fusion space, integrating modest x86 with robust video on low-wattage single chips. You end up with exactly what we have here: BD (a server chip rebranded) and Llano, with plans to improve both with descendants.
In highway terms it would be akin to building semis and commuter cars. This is the high-performance forum, and while the Ferrari Enzos are cool and badass, it's hard to fault AMD for the approach. After all, when you're on the road today, you'll see a bunch of semis and commuter cars; it's economics. Performance sells magazines, utility sells products.
BD must be a killer server beast, because Cray (you know Cray, right?) just got a $97M contract from Oak Ridge NL about a month or so after taking possession of the first boxes of BD-based server chips. I think Cray knows a thing or two about making computers haul butt.
Now we'll see if any of that translates into the client space...
MossySF - Wednesday, October 12, 2011 - link
I'll agree with this. We have a ton of servers, both Intel and AMD. More integer cores are better. FPU? Games? 3D? Media encoding? Who cares. Hyper-threading does nothing when you peg all cores with VMs running at full blast. For example, we have one configuration where we run 4 VMs on a Phenom II X4 3GHz and it performs roughly the same as our 4-core i7 2.8GHz. If we add a 5th VM, both slow down equally, showing that there are simply no free resources in the CPU pipeline for hyper-threading to steal.

So where the Bulldozer platform is extra good is cheap/disposable/uniform VM hosts running Linux. Instead of one mega-expensive quad Xeon costing $100K, you have 10 x 1U Bulldozers that can handle 8 VMs each at full utilization without speed degradation, for $10K. In addition, you'd probably run something like CentOS (or RHEL), whose default packages are not compiled with Intel's uber-compiler, so many of the +25% gains you see in benchmarks here don't exist at all in the Linux world.
The most disappointing part, though (which I mentioned previously), is the lack of speed improvement in the chipset. The first bottleneck when adding more VMs is CPU cores, but the second is disk bandwidth. If you have disk-intensive VMs, you need a separate hard drive for each VM to avoid HD seek latency killing performance. But putting 8 HDs in a 1U is impossible, so you need 2U/4U servers taking up too much rackspace.
The answer, of course, is a fast SSD: 500MB/s with near-zero latency can be split off to separate VMs with linear degradation, versus exponential for HDs. But the SB950 chipset at 2GB/s bidirectional can only handle 2 fast SSDs. So 1000MB/s divided by 8 VMs reading at full blast is 125MB/s per VM, which is regular SATA3 HD speed. Double that to 4GB/s and you can easily put 4 x 2.5" SATA3 SSDs in a 1U, delivering 250MB/s to each VM. Now we're back to at least 2nd-generation SSD performance.
(Note: all the Intel chipsets also max out at 2GB/s bidirectional, and stuffing a super-expensive RAID controller in a 1U is not cost effective.)
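The bandwidth split described above is easy to sketch (an illustration only; it assumes the chipset link's one-way bandwidth divides evenly across VMs reading at full blast, with no other overhead):

```python
# Per-VM sequential read bandwidth behind a shared chipset link
# (illustrative; assumes an even split and no protocol overhead).
def per_vm_mb_s(one_way_gb_s: float, vms: int) -> float:
    return one_way_gb_s * 1000 / vms

# SB950 today: 2 GB/s bidirectional = 1 GB/s each way, 8 VMs
print(per_vm_mb_s(1.0, 8))   # 125.0 MB/s per VM (plain HD territory)

# Hoped-for 4 GB/s link: 2 GB/s each way
print(per_vm_mb_s(2.0, 8))   # 250.0 MB/s per VM (early-SSD territory)
```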
GatorLord - Wednesday, October 12, 2011 - link
Great analysis... I'm not a server guy and can hardly keep up with the average 15-year-old on desktop jargon and theory, but it seems that the bigger cache would mitigate the roundtrips to disk in the conditions you describe. I guess that's why they left that fat L3 cache on the die... assuming Interlagos and Zambezi are really closer than cousins... more like siblings.

Great financial case... that I get. I heard a joke the other day that went something like "Whenever they say it's not about the money, it's about the money". It's always about the money... :)
Macabre215 - Wednesday, October 12, 2011 - link
This is reminiscent of the Phenom I launch, just without the TLB bug. You have a chip that barely outperforms its predecessor and at times performs a little worse. AMD might be able to make a Phenom II-like product out of Bulldozer, but I think it's too late. They needed to come out of the gate strong with this one.

Right now I'm on a Phenom II and will be upgrading to Sandy Bridge soon. I'm done with AMD on the desktop front, a platform which is probably dead within the next ten to twenty years anyway. AMD should just stick to the server market and mobile platforms for CPUs, as that's where they have a dog in the hunt.
BTW, this is a disgrace to the FX name.
Iketh - Wednesday, October 12, 2011 - link
I understand why AMD execs resigned in the past 2 years... can you imagine what it must have looked like back then? "Nah, we've actually gotten slower per thread, and will need 4GHz+ to compete now..."
saneblane - Wednesday, October 12, 2011 - link
What was the CPU usage like? I have a sinking feeling that CPU usage was low for most of the review. I heard rumors that AMD is working on a patch; it would make sense, because Zambezi loses to the Athlon X4 sometimes, and that doesn't make any sense to me at all. There has to be a performance loss somewhere, whether it is in the CPU itself or because its design is hard for Windows to handle. This processor can't be this slow.
punchcore47 - Wednesday, October 12, 2011 - link
Look back at when the first Phenom hit the street. I think AMD will right the ship, update over time and fix any problems. The gaming performance really looks sad, though.
bhima - Wednesday, October 12, 2011 - link
BD will have to drop in price pretty hard to compete given these benchmarks. It's designed for an even smaller niche than gamers: people who use heavily threaded applications all day.

I also don't see why anyone would ever put these processors into a server, with over 100 watts of extra heat running through your system compared to the i5 and i7. Interlagos may be more efficient, but the architecture is already very power-hungry compared to Intel's offerings.
Really great way to end the review though, Anand. AMD must return to its glory days so Intel doesn't continue to jack consumers. Hell, after these benchmarks I could see Intel INCREASING their prices instead of decreasing them.
haukionkannel - Thursday, October 13, 2011 - link
Hmm... It seems that BD is leaking a lot of energy when running high freguency! But I am guite sure, that is very good in low 95w usage, with lower freguency. So I think that BD is actually really good and low energy CPU for server use, but the desk top usege is very problematic indeed.Seems to be a lot like Phenom release. A lot of current leakage and you got either good power and weak porformance or a little better performance and really bad power consumption... Next BD upgrade can remedy a lot of this, but it can not make miracles...
I am quite sure that BD will become a reasonable CPU with upgrades and tinkering, but is it enough? The 32nm production technology will get better in time, so power usage will improve and they can raise frequencies. Single-threaded speed is the main problem... If, by some divine intervention, programmers really learn to use multiple cores and streams, the future is bright... But most probably the golden number of cores will stay at 2-4 for the far distant future (not counting some special programs...), and that is bad. It would require a lot of re-engineering to make BD better in single-stream applications, and that may be too expensive at this moment. There is some real potential in BD, but it would require too much from the software side to harness that power, when Intel has such a huge lead in single-core speed... The same reason Intel buried their "multicore" GPU project some time ago...
We can only hope that Fusion and the GPU department keep AMD afloat long enough... Or we will have to face the long dark of an Intel monopoly... That would be the worst-case scenario.
Shining Arcanine - Wednesday, October 12, 2011 - link
Anand, your compilation benchmark only tests single-threaded improvements. Would it be possible to do a multithreaded benchmark? Just do a compilation on Linux with MAKEOPTS=-j9. Also, most of your benchmarks only test floating-point performance. It was obvious to me that Bulldozer would be bad at that, and I am not surprised. Is it possible to test parallel integer-heavy workloads like a LAMP server? Compilation is another one, but I mentioned that above.
know of fence - Wednesday, October 12, 2011 - link
Here's hoping that reviews to follow will offer at least some perspective on why single-threaded performance is still important, instead of just harping on it (as reviews before this one did). Everybody can run a benchmark, but it's the broad context and perspective that I came to appreciate in AnandTech reviews, beyond "I suspect this architecture will do quite well in the server space". Mind you, I'm not referring to the big AMD vs. Intel broad strokes, but the nitty-gritty.
geforce912 - Wednesday, October 12, 2011 - link
Honestly, I think AMD would have been better off shrinking Phenom II to 32nm and slapping on two more cores.
tech4tac - Wednesday, October 12, 2011 - link
Agreed. An enhanced 8-core Phenom II X8 on the 32nm process would have used ~1.2B transistors on a ~244mm^2 die (smaller than Deneb and about the size of Gulftown) as opposed to the monstrous ~2B and 315mm^2 of an 8-core Bulldozer. Given the same clock speed, my estimates have it outperforming the i7-2600 in most multi-threaded applications. And with a few tweaks for more aggressive turbo under single-core workloads, it would at least have been somewhat competitive in games. Bulldozer is a BIG disappointment! It would need at least another 4 cores (2 modules) tacked on to be worthwhile for multi-threaded applications. AMD has stated it is committed to providing as many cores as Intel has threads (Gulftown has 12 threads, so a 12-core Bulldozer?), so maybe this will happen. Still... nothing can help its abysmal single-core performance. If they can do a 12-core Bulldozer for less than $300, I might get one for a work machine but stick with an Intel chip for my gaming rig.
Shadowmaster625 - Wednesday, October 12, 2011 - link
Companies this incompetent should not be allowed to survive. They bought a GPU company 5 years ago and have done absolutely nothing to create any type of fusion between the CPU and GPU. You still have huge multi-layer, multi-company software bloat separating the two pieces of hardware. They have done nothing to address this, and it is clear they never will. Which makes the whole concept a failure. It was a total waste of money.
HalloweenJack - Wednesday, October 12, 2011 - link
And the day after, Intel triples its CPU prices... is that what you want? $500 entry-level CPUs?
GatorLord - Wednesday, October 12, 2011 - link
Consider what AMD is and isn't, and where computing is headed, and this chip really begins to make sense. While these benches seem frustrating to those of us on a desktop today, I think a slightly deeper dive shows there is a whole world of hope here... with these chips, not something later. I dug into the deal with Cray and Oak Ridge: Cray is selling ORNL massively powerful computers (think petaflops) using Bulldozer CPUs controlling NVIDIA Tesla GPUs, which perform the bulk of the processing. The GPUs do vastly more and faster FPU calculations, and the CPU is vastly better at dishing out the grunt work and processing the results for use by humans, software, or other hardware. This is the future of High Performance Computing, today, but on a government scale. OK, so what? I'm a client user.
Here's what: AMD is actually best at making GPUs...no question. They have been in the GPGPU space as long as Nvidia...except the AMD engineers can collaborate on both CPU and GPU projects simultaneously without a bunch of awkward NDAs and antitrust BS getting in the way. That means that while they obviously can turn humble server chips into supercomputers by harnessing the many cores on a graphics card, how much more than we've seen is possible on our lowly desktops when this rebranded server chip enslaves the Ferraris on the PCI bus next door...the GPUs.
I get it... it makes perfect sense now. Don't waste real estate on FPUs when the ones next door are hundreds or thousands of times better and faster too. This is not the beginning of the end of AMD, but the end of the beginning (to shamelessly quote Churchill). Now all that cryptic talk about a supercomputer in your tablet makes sense... think Llano with a so-so CPU and a big GPU on the same die, with some code tweaks to schedule the GPU as a massive FPU, and the picture starts taking shape.
Now imagine a full blown server chip (BD) harnessing full blown GPUs...Radeon 6XXX or 7XXX and we are talking about performance improvements in the orders of magnitude, not percentage points. Is AMD crazy? I'm thinking crazy like a fox.
Oh, as a disclaimer: while I'm long AMD, I'm just an enthusiast like the rest of you and not a shill. I want both companies to make fast chips that I can use to run Monte Carlo simulations and linear regressions... it just looks like AMD has figured out how to play the hand they're holding for a change... here's to the future for us all.
olafmetal - Wednesday, October 12, 2011 - link
Would it be possible to benchmark Civ V's turn duration (or level creation or load times)? That would be a more CPU-dependent measurement than frame rates, I would think.
artemisgoldfish - Wednesday, October 12, 2011 - link
It's like Barcelona all over again. OMG THIS IS GOING TO BE SO GREAT, and then an underwhelming part.
James5mith - Wednesday, October 12, 2011 - link
This is a sad day. The Bulldozer design is basically AMD repeating Intel's mistakes. They were able to zip past Intel back then because Intel focused on clock speed while they focused on IPC. Now it seems to be the other way around. "Those who do not learn from history...."
hasu - Wednesday, October 12, 2011 - link
"Heavily threaded workloads obviously do well on the FX series parts, here in our 7-zip test the FX-8150 is actually faster than Intel's fastest Sandy Bridge"
We work on a product which is heavily multi-threaded (around 90 threads in one component). Wondering if the FX-8150 would be a better bet to lower hardware costs.
tynopik - Wednesday, October 12, 2011 - link
first page: Bulldozer pats
Sunagwa - Wednesday, October 12, 2011 - link
After such a long wait I feel so... underwhelmed. I was hoping for something competitive that would force Intel to lower their prices or make me want to buy an AMD CPU. Oh well, off to buy my 2600K.
dervisionar - Wednesday, October 12, 2011 - link
(first post, btw) Very disappointing performance. I'm in 100% agreement with everyone displeased about the single-threaded performance, and find it laughable that they'd think performance is only gimped because Win7 doesn't use the cores properly.
I'm actually a fan of the A8-Llano series of processors and wanted to see what the hybrids of Bulldozer with an integrated 6700 or 6800 series AMD Graphics would be like. Think about it: it could have been a media center from hell's dream chip. "Good Enough" graphics for gaming, HD acceleration, and plenty of PCI-E slots on a mATX to pop in as many satellite, OTA, and/or digicable tuners to make you sick. And have the main heat generators all cooled by one trick water/copper air cooler. That would have been pretty sweet.
But they're going to have a lot of tweaks to do to make this competitive enough for anyone, myself included, to want to commit to it. I've been an AMD user ever since the K6-II days when they were at Intel's heels, up through the Socket 939 days. But then I finally went Intel Wolfdale and currently have a 2600K. They have a good thing with the purchased ATI offerings... but in order to sell Fusion, the other piece they're "fusing" into the chip needs to be enticing.
Ramshambo2001 - Wednesday, October 12, 2011 - link
This is just sad. I was really hoping for a comeback for AMD with this architecture.
HalloweenJack - Wednesday, October 12, 2011 - link
I've been reading that the scheduling in both the Windows 7 and Linux kernels is a mess for Bulldozer, and that when there's a patch, performance will improve. Any truth to this? It's what I've read over on XS.
Hector2 - Wednesday, October 12, 2011 - link
Ivy Bridge on 22nm (Sandy Bridge on steroids) is just around the corner. It's not going to be pretty.
superccs - Wednesday, October 12, 2011 - link
If GHz is BD's salvation, then why didn't you post some OC'd benchmarks? C'mon, are you really going to make us do it?
chizow - Wednesday, October 12, 2011 - link
Anand, you said it yourself: AMD hasn't been competitive since 2006 and Core 2. Has there been a real choice in the CPU space for the last 5 years? Do we even need choice? The fact of the matter is that Intel is only competing with *THEMSELVES* in the CPU space, and this has been the case for the last 5 years. They need to keep dropping prices and releasing faster processors because that's the only way they are going to sell more CPUs, irrespective of whatever drek AMD is releasing in their wake. I really wish tech writers and pseudo-economists would stop repeating the lie of a meme that "we need competition" in order for innovation to continue, when that's simply not the case.
bji - Wednesday, October 12, 2011 - link
Your comments are a gross oversimplification. There are many segments of the x86 market, and AMD has been competitive in some of them over the past 5 years. Just because they haven't been competitive in your favorite segment (which I am guessing is the high end) doesn't mean that they haven't been competitive at all. I bought an Atom board and a Zacate board, and the Zacate was clearly and thoroughly superior. So AMD wins at the very low end.
At the low midrange I believe that AMD has always, except until maybe very recently, been the better choice; of course AMD has achieved this by basically making no money on their parts which obviously is not sustainable.
At the mid-midrange AMD has been sometimes competitive and sometimes not over the past 5 years. I personally bought a 6-core Phenom II around this time last year because the workload I care about, massively parallel software compiles, was head and shoulders better on AMD than on similarly priced Intel at the time.
Once you get into the high midrange and all segments of the high end, then I finally agree with your comments - AMD has not been competitive there for 5 years.
On the laptop front, AMD has mostly been uncompetitive in the time frame in question except where they have reduced price to an unsustainable (for them) level and stolen the occasional win. The only win for AMD in that space is, once again, their Atom equivalents which win in netbooks.
Your post is an oversimplification. However, if AMD doesn't pull a rabbit out of their hat, then for the next 5 years your oversimplification will end up being true.
Competition driving innovation is not a lie, and your appeal to people to stop repeating this truth will go unheeded so don't waste your breath.
chizow - Wednesday, October 12, 2011 - link
No, it's not really a gross oversimplification; it's just an accurate generalization borne out repeatedly over the last 5 years. Intel has AMD beat in every single price-performance segment in the x86 market except in the rare cases where mGPUs come into play, like your Zacate example. Which is ironic in and of itself, given some of the asinine remarks AMD leadership has made in the past with regard to ULP products.
Just look at this low to mid-range line-up, the pricing segments have been steady for the last 4-5 years during Intel's dominance, the ONLY things that change are the SKU names and clockspeeds.
http://www.anandtech.com/show/4524/the-sandy-bridg...
This all occurs independent of any meaningful competition from AMD, as every single Intel option offers better performance for the same or slightly more money.
Even in multi-threaded apps, Intel often achieves performance parity with FEWER cores and less TDP. It's embarrassing, really, that Intel can compete with AMD's 6-core processors using their *2*-core processors, even in the threaded apps you're referring to.
http://www.anandtech.com/show/3674/amds-sixcore-ph...
Tick Tock doesn't stop when AMD stumbles, nor does Intel's commitment to process fabrication leadership. Why? Because their investors and consumers demand innovation INDEPENDENT of what AMD does or does not do to compete. Competition does not drive innovation in the x86 desktop market and has not for the last 5 years, the bottom line does.
Al1458 - Wednesday, October 12, 2011 - link
AMD is the only viable alternative to Intel. If AMD doesn't get out of this slump, we could well see them leave the CPU market, which would leave one CPU manufacturer, and we all know how restrained Intel is with its pricing...
hoohoo - Wednesday, October 12, 2011 - link
I really want to see some server benchmarks: LAMP and IIS stacks, DBMSs, HPC benchmarks. I do not think the real potential is being explored (for FPU-intensive code especially); a recompile is needed with a compiler that knows what a BD is.
Kyanzes - Wednesday, October 12, 2011 - link
Simply horrible. Surely there has to be some mistake.
mapesdhs - Wednesday, October 12, 2011 - link
Sometimes I think it's a pity a company like IBM doesn't make a consumer chip, ie. a rival to Intel
which has enough cash and expertise to do the job properly, come up with something that can be
a good fit for both the consumer & server spaces. IBM certainly has plenty of CPU design experience
for servers, eg. the 4.25GHz 8-core Power7.
Ian.
FunBunny2 - Wednesday, October 12, 2011 - link
They did. It's called the PPC. Nobody bought it.
sanityvoid - Wednesday, October 12, 2011 - link
What really needs to happen is that AMD merges with or is bought by IBM. Let the CPU chips come from a big company that knows chips. The ATI side of AMD is pretty good. I don't know what the answer is, but it shows that AMD needs some help in getting back up to Intel's level.
SKYSTAR - Wednesday, October 12, 2011 - link
Horrible gaming performance. AMD, rest in peace (like the Undertaker said).
Burticus - Wednesday, October 12, 2011 - link
Years of wasted development effort. Why couldn't they have just die-shrunk a couple of Phenom II X4s or X6s onto one chip? Looks like I know what my new build will be... i5 or i7. Sad days. I have been AMD exclusive since that awful Prescott P4 I bought back in 2003.
bcanthack - Wednesday, October 12, 2011 - link
This is interesting: even though Bulldozer launched today with what are considered bad results, their stock price is up almost 4%. Is it possible the investors know something the customers don't?
GatorLord - Wednesday, October 12, 2011 - link
The financial press reported that AMD was the second most 'upgraded' stock in the S&P 500 in the past week, with 3 upgrades from analysts. It's contextual though, since they are upgrading it from poor ratings like sell, hold, or market underperform. The bigger reason is that when they reported the GloFo headaches, as they are required to do, investors brutalized the stock and drove it below $5... once it went below $5, a lot of pension funds, etc. that have rules against holding 'penny stocks' had to sell, driving it even lower, eventually to $4.31 when it bottomed out.
At that point, the stock purely on a P/E basis is cheap even with the headaches and folks like me step in figuring that even if they get a little traction there's a huge upside...for all the bitching, manufacturing problems and glitches, etc... almost always get worked out. Most processes have positive dynamic stability, meaning that things tend to return to a mean steady state, so if it's well below the mean for temporary reasons, it'll go back up and vice versa.
The buyers are just putting the stock closer to where it needs to be based on the ground truth...that said, it's still cheap as hell and could triple just to get to the S&P average P/E. This is one of those times when you'll look back and say, I could've had AMD below $5.
saneblane - Wednesday, October 12, 2011 - link
The Linux scheduler is more advanced than the one in Windows, so if Windows can't deal with the CPU, let's see if it works the way it should under the Linux environment. The reviews on the net are way too inconsistent on Windows; Legit Reviews has the Bulldozer CPU neck and neck with the i7 2600, check it out. http://www.legitreviews.com/article/1741/1/
jellowiggler - Wednesday, October 12, 2011 - link
The article states that more clock speed is needed to negate the architectural limits on single-threaded performance. Then in the very brief OCing results you indicate that the processor will easily clock from the default 3.6 to 4.6. But there are no benchmarks of how that impacts performance in any of the tests. I want to know if I can buy a $250 processor that will OC to 4.6 on stock cooling, making it competitive in single-threaded applications while stomping other CPUs at multi-threaded tasks at that speed.
Let's see the OC results and temps compared to the regular benches, please.
jellowiggler - Wednesday, October 12, 2011 - link
Sorry for the grammar. My iPhone ate my post.
mythun.chandra - Wednesday, October 12, 2011 - link
In the Rage vt_benchmark section, you mention: "Since transcoding is done by the CPU this is a pure CPU benchmark."
This is true only if you're using non-NVIDIA GPUs. For people using NVIDIA GPUs no more than a couple of years old, a lot of the texture transcoding is offloaded from the CPU and done on the GPU using CUDA, and this gives a pretty good speedup.
silverblue - Wednesday, October 12, 2011 - link
...this article is worth looking at: http://www.hardwareheaven.com/reviews/1285/pg1/amd...
They used an ASRock board; however, their gaming tests seemed to push towards a GPU limit (using a 6950) rather than a CPU limit. Then again, one might argue that you really would be pairing a fast card with such a processor and using very high settings.
There's something very fishy going on here as well. Take a look at these two pages:
http://www.tomshardware.com/reviews/fx-8150-zambez...
http://www.hardwareheaven.com/reviews/1285/pg11/am...
The 580- and CVF-equipped setup is being trounced by the 6950 and Extreme4 setup. What gives? As most of the reviews I've seen have been based on the CVF motherboard, is it entirely possible that *gasp* ASUS made a bad board/BIOS, or is there something rather odd about HH's setup?
Discuss!
vtohthree - Wednesday, October 12, 2011 - link
...but because Intel doesn't have to try hard to compete. Here we were sitting as consumers, waiting for a proper response from AMD so that Intel would be on their toes to unleash more potential in their chips or lower prices, but this is sort of sad...
ET - Wednesday, October 12, 2011 - link
I recently bought a Phenom II X6 1090T to upgrade an X3 710. Looks like a better deal now. BD's power draw in particular is disappointing. No doubt Intel is what I'd recommend to others. I've been using AMD CPUs for several years now (an Athlon XP 2100+ was my first), and I still like AMD, but I'm disappointed by BD, especially after the long wait.
Loki726 - Wednesday, October 12, 2011 - link
Thanks Anand for the compiler benchmarks. It seems like performance on Bulldozer is highly application dependent: better at data-parallel work and worse (even than Phenom) on irregular and sequential applications.
I'll probably skip this one.
I don't mind this tradeoff, but the problem is that AMD already has a good data-parallel architecture (their GPU). In my opinion they are moving their CPU in the wrong direction. No one wants an x86 throughput processor. They shouldn't be moving the CPU towards the GPU architecture.
AMD: Don't pay the OOO/complex decoder penalty for all of your cores. If your app is running on multiple cores, it is obviously parallel. Don't add hardware that wastes power trying to rediscover it. Then, throw all your power budget at a single or a few x86 cores.
Beat Intel at single threaded perf and then use your GPU for gaming, and throughput processing (video, encryption, etc).
I'm not a fan of Intel, but they got this right. If they get over their x86 obsession and get their data-parallel act together they are going to completely dominate the desktop and high end.
dubyadubya - Wednesday, October 12, 2011 - link
Care to share which tests are 64-bit? Each benchmark program used should specify whether it's 32- or 64-bit. Why do all review sites forget to include this critical info? From the limited results I can find on the net, AMD sees a large performance increase running 64-bit code over 32-bit code, while Intel sees little if any increase.
HangFire - Wednesday, October 12, 2011 - link
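(For what it's worth, on Linux it only takes a second to check which kind of build a given benchmark binary is; the path below is just an example, substitute the binary under test.)

```shell
# Check whether a binary is a 32- or 64-bit build. The -L flag follows
# symlinks so you inspect the real executable, not the link.
file -L /bin/sh
# A line containing "ELF 64-bit" indicates a 64-bit build;
# "ELF 32-bit" indicates a 32-bit build.
```

On Windows the same question can usually be answered with a PE inspection tool, but for the Linux-side benchmarks this one command would settle it.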
I've got an Asus board that promises to support BD, and I've been holding off upgrading my unlocked/overclocked 550BE for literally months, and for this? I might as well just get a Phenom II quad or 6-core. I've said all along that AMD needs to address their clock-versus-instruction efficiency to be competitive. To do that they need to redesign their cores and stop dragging along their old K8 cores.
So here we are with Bulldozer, Wider front end, TurboCore now works, floating point decoupled, 8 (int) cores, and... still flogging the same instruction efficiency as the old K8 cores (at least, the integer portion of them).
Oh, yeah, I'm sure at the right price point some server farms will be happy with them, and priced low enough, they can hold on to the value portion of the marketplace. To do both they'll have to compete aggressively on price, and be prepared to lose money, both of which they seem to be good at.
Like Anand said, we need to see someone actually compete with Intel, but it appears that AMD has lost the ability to invent new processor cores, it can only manipulate existing designs. Instead of upgrading the CPU, it looks like I'll go for a full Intel upgrade, unless I can find an 1100T real cheap. Hmm, that's probably a real possibility. I'm sure a lot of AMD fans are going to be trading them in now that they see what their AM3 upgrade path is(n't).
alpha754293 - Wednesday, October 12, 2011 - link
I think that you should clarify the difference between what you call "server" workloads (i.e. OLTP/virtualization vs. HPC).I suspect that with one shared FP between two ALUs; HPC performance is going to suffer.
The somewhat-computationally intensive benchmarks that you've shown today already gives an early indication that the Bulldozer-based Opterons are going to suffer greatly with HPC benchmarks.
On that note: I would like to see you guys run the standard NCAC LS-DYNA benchmarks or the Fluent benchmarks if you can get them. They'll give me a really good idea as to how server processors perform in HPC applications (besides the ones that you guys have been running already). Johan has my email, so feel free to ask if you've got questions about them.
Ananke - Wednesday, October 12, 2011 - link
Bulldozer reminds me of Sun's (Oracle's) Niagara architecture. It seems AMD aimed at the server and professional market. It makes business sense. The profit margins there are net 50-60% (this is AFTER marketing, support, and other overhead costs), and along with high-performance workstations it is the only growing market as of now. Hence the stock market lifted AMD's stock. The gaming and enthusiast market is around 0.7% of CPU revenue; yep, that's right, I work with this kind of statistics data, guys. This is a promising architecture (despite the fact that it is not good for home enthusiasts). AMD should focus on providing more I/O lanes through the CPU, i.e. PCIe lanes on cheaper boards without requiring additional chips. That will allow placing more GPUs using overall cheaper infrastructure, exactly the way the HPC and server market is evolving. Then they should really get a good software team and make/support/promote an SDK for general GPU computing in line with what NVIDIA did with CUDA.
For anything mainstream (aka Best Buy, Walmart, etc.), Llano is good enough.
As I said, this Bulldozer chip apparently is not good for enthusiasts, and Anandtech is an enthusiast site, but unfortunately this is just a very small niche market. People should not bash a product, because it doesn't fit only their needs. It is OK for the vast market.
GatorLord - Wednesday, October 12, 2011 - link
Thanks for the VERY interesting stats. I had a hunch it made good sense, but since I don't work with these data it was just a hunch in the end. Now it's better... maybe a hunch plus. We should feel lucky that they pay any attention to this segment at all... I suspect they do because a lot of decision influencers are also computer racers at home.
alpha754293 - Thursday, October 13, 2011 - link
One thing I will also say/add is that while people are perhaps grossly disappointed with the results, think about it this way: what you've really got is a quad-core processor (I don't count the ALU cores as full cores, just the FPUs) doing a 6-core job.
So if they went to a 6-module chip, the benefits can actually be substantial.
And on the 8-module server processor, it can be bigger even still.
And yes, this is very much like the UltraSPARC T-series (which, originally was designed by UltraDense as a network switching chip), but even they eventually added a FPU per core, rather than just one FPU per chip.
The downside to the 8-module chip is a) it's going to be massive, and b) it won't be at the clock speeds it NEEDs to be to compete.
Icehawk - Wednesday, October 12, 2011 - link
I quickly ran the Rage vt_benchmark and got ~0.64 at 1 thread and 0.25 for 2-6 threads, which lines up with your Intel numbers. BUT I'm running a Q6600, 4GB, and a GTS 250... shouldn't I see much worse scores compared to an i7 and a current-gen video card? Does this have something to do with Rage's *awesome* textures, or?
Shuxclams - Wednesday, October 12, 2011 - link
Wow... I mean, what else do you say? Intel is walking away from the competition, and that sucks for everyone long term. FAIL
ezinner - Wednesday, October 12, 2011 - link
AMD, please up your game. Stop being just the lower price point and become the leader. We do not need 10 to 20 processors to choose from. Let's have fewer products and better performance, even if the price is the same as or more than Intel's. I like that you stick with a socket longer than Intel, but you keep getting beaten every time. AMD needs new people who can drive them further ahead.
BlueFlash - Wednesday, October 12, 2011 - link
New architectures always require software optimization to shine. Will enough Bulldozers sell to convince major software vendors to do that work? Could we get some AMD-optimized benchmarks? In any case, I prefer the APU strategy, given my compute needs.
Chaser - Wednesday, October 12, 2011 - link
I invested in AMD a while back and, like many on here, thought BD was going to restore AMD's prominence as the enthusiast's definitive choice once again. Thankfully, especially now, I pulled out my money while it was still safe, decided not to wait for my next upgrade rig, and (thankfully, now) went Z68, i5 2500K, and GTX 580. Maybe my hopes for AMD on the CPU side were a pipe dream. It just seems that Intel is so far ahead of them now, with their rather aggressive development, that AMD is destined to be the bargain-basement alternative forever. Regardless of my lack of realism or false hopes, I suppose, it's still a downer.
bji - Wednesday, October 12, 2011 - link
At a certain point in the not-too-distant future, the x86 market will be stagnant enough in terms of sales growth (with mobile devices taking larger and larger shares of the computing market) that it will not pay to spend the exponentially increasing R&D dollars to advance the x86 state of the art. At that point, AMD will have an opportunity to catch up to Intel, if AMD survives in the x86 market long enough.
FunBunny2 - Wednesday, October 12, 2011 - link
Always remember: no one, that's *no one*, not even Intel, runs the x86 instruction set in hardware. No one. All these x86 CPUs emulate the instruction set through a RISC core. It's only a matter of time until nobody bothers with the emulation.
Mishera - Wednesday, October 12, 2011 - link
Let me try to paint a glass-half-full picture: AMD's CPU division must have known the performance of this architecture for years now... So I imagine this might not be just a "let's just get something out there" move. Maybe there are some alternative ways of looking at this:
1. AMD has made it clear that this is the beginning of what they really mean by APU, and there will be more architectural movement towards that end in the future. Could this be a transition CPU for a possibly very different x86 future?
This may not be just what AMD is planning to do on its own. There have been rumors of a possible ARM partnership.
http://semiaccurate.com/2011/06/22/amd-and-arm-joi...
Could it be that this chip has a better set up than phenom for an interchangeable modular structure?
2. With desktop pcs in decline could this be a move to capture more server space, which is still a growing market?
3. This obviously also begs the question is there some benefit of this design with a gpu attached?
4. Is there some benefit for mobile computing?(starting to stretch here...)
I don't know. I don't want to think this is just an epic fail but...
This is kind of sad, since I was betting they'd be 3-for-3, but on the really big one they shot out a dud.
fic2 - Wednesday, October 12, 2011 - link
Kind of wondering about #3. Let the wait for Trinity begin. Hopefully the changes from Bulldozer to Piledriver help more than expected.
If the FM1 platform weren't dead I would think about getting a Llano desktop for my nephew, but I think he'll just have to wait. Maybe when FM2 motherboards start coming out, Llano can drop in, with a possible change to Trinity later.
Mishera - Thursday, October 13, 2011 - link
Yeah, I think I meant #3 to be part of #4. I can't help but think that if Bulldozer performs poorly in a Trinity package, that's very bad news for AMD, even if it performs well in servers. I am left wondering how much of Bulldozer as an architecture is an evolutionary step to the future they've described. I think that's much more important than the performance we see today. Perhaps in that respect this could be a Fermi and not so much a Pentium 4.
tipoo - Wednesday, October 12, 2011 - link
Their own last-generation 6-core Thuban beats this in half the tasks. If you have an application that effectively uses 8 threads, this might be a worthy upgrade, but anything lightly threaded is pretty bad (looking at the real-world benchmarks: TechReport, AnandTech, Tom's Hardware, etc.). I had high hopes for this. They did say, though, that Windows 8 will make better use of modules (vs. cores) and will know what to do with them better, and to expect another 10% increase from that. But we're still at a point where an AMD core doesn't even beat a Nehalem core, let alone a Sandy Bridge or Ivy Bridge.
*Le Sigh*
This Bulldozer is more of a Mudshovel. Their goal of a 15% single-threaded performance increase per core per year won't have them catching up to Intel anytime soon either.
Well, the optimist in me says the L3-cache-less, higher-clocked quad-core mainstream parts will be more competitive. And cheap too; the 6-core FX is only $169.
madseven7 - Thursday, October 13, 2011 - link
How could the 6-core FX be competitive? For $169 you could get a Phenom II that beats the 8150 FX. Imagine what it would do to the 6-core FX chip?
jerkstorez - Wednesday, October 12, 2011 - link
Was pretty hyped for this chip to come out, and if it was a decent performance boost, I was ready to upgrade from my Phenom II 1090T. It looks like it'd barely be an upgrade in many areas and a definite downgrade in others. With a whole new architecture and all the hype, I expected a lot more, at least better performance than the current generation of AMD chips. Very disappointed.
boobox - Wednesday, October 12, 2011 - link
What is with the settings choice for all these benchmarks? Who is buying these processors and running games on medium settings and such low resolutions?
I was hoping to see 1920x1080 at least for tests on the highest settings.
JarredWalton - Wednesday, October 12, 2011 - link
It's a CPU review, so settings were selected to show both CPU-limited (or at least more CPU-limited) and GPU-limited (or more GPU-limited than CPU) scenarios. And of course, that's only one facet of the overall review.
tipoo - Wednesday, October 12, 2011 - link
Maxing out the GPU would create a bottleneck and hide the CPU's performance.
CoolGoodGuy - Wednesday, October 12, 2011 - link
I read this extensive review. However, after the last page, your mention of the "Windows scheduler problem" made me think these tests might be biased, so I thought of posting this comment. Windows is compiled using the Intel compiler, which optimizes code very well for Intel processors but doesn't do that for AMD, whereas Linux is mostly compiled using GNU GCC. So Linux benchmarks would be more neutral for both Intel and AMD processors.
Also, nowadays a lot of desktop users have started using Linux distros like Ubuntu, and in servers Linux is mainstream.
Server users would greatly benefit from a Linux CPU benchmark.
So, I would request you to include Linux benchmarks for processors in your reviews.
Thanks,
JarredWalton - Wednesday, October 12, 2011 - link
Server benchmarks will be coming from Johan at some point, though obviously those require a lot of time to put together. As for the compiler of Windows, you're never going to change that, and Windows is still 90%+ of the market. Ultimately, as a hardware manufacturer you need to make hardware that runs the stuff people are doing faster than the competition or there's not much point. It's like having the world's fastest GPU with crappy drivers: no one will like it because it can't run games.
FunBunny2 - Wednesday, October 12, 2011 - link
Here's your answer: http://developer.amd.com/tools/gnu/pages/default.a...
Burticus - Wednesday, October 12, 2011 - link
Wait wait wait wait wait, wait some more, wait for years... then... bleh. Doesn't even match up to the last product. That is not the way to move the bar forward or stay in business, AMD.
At some point why didn't someone just say... you know what? This thing may sell like hotcakes for servers but just doesn't make sense for desktop. Sell the Opterons to get that server profit margin, and just die-shrink the Thuban and make it faster for desktop. Know when something is a lost cause and fold the hand already.
Geforce man - Wednesday, October 12, 2011 - link
Any chance for a few benchmarks at the overclocked settings? It would be quite interesting to see, as the 4.6GHz range seems to be a common OC for an i7 2600K (obviously this would be slower, but it'd be nice to see).
ET - Wednesday, October 12, 2011 - link
I gather you're trying for a job at AMD.
wolfman3k5 - Wednesday, October 12, 2011 - link
Even the most hardcore AMD fanboys are having a hard time defending this bulldozed Bulldozer chip. I have seen people blaming anything from GlobalFoundries to saying that "it's not all that bad."
If you're an AMD fanboy in denial, you should read what's wrong with Bulldozer:
1) AMD's marketing is extremely misleading, because Bulldozer is no more an 8-core chip than Intel's Core i7 is, for example. What AMD did was create a crude implementation of CMT (chip-level multithreading), and it's debatable as of now whether it's any better than Intel's Hyper-Threading, with the mention that Hyper-Threading takes less than 5% of die space per core. Basically each module (as AMD likes to call its cores now) has an additional integer unit, but the problem is that the additional integer unit shares the fetch and decode units with the other integer unit in the module. Ergo, the CPU is not a true 8-core. It is just a cheap AMD marketing gimmick.
2) The reason for the horrendous single-threaded performance is the poor design of the integer units in the modules (you can interpret this as very low IPC). Couple that with the fact that two integer units share the same fetch and decode units, so when one additional thread is added and the pipeline needs to be flushed, it messes things up for the other integer unit as well. At a bare minimum AMD should have added a fetch and decode unit for each integer unit. But wait, if they had done that, then they would have needed to add a dedicated FPU for each separate integer unit. Hell, this is almost worse than Hyper-Threading. At least Hyper-Threading makes use of unused resources in the CPU; AMD's implementation can leave resources unused.
Bottom line: no matter how you slice and dice it, the Bulldozer chip cannot be called an 8 core chip. It's a 4 Core CPU with CMT.
3) In order to compensate for the design flaws in each module (what I've described in point no. 2), AMD increased the L2 cache to almost absurd levels for each module, to mask some of the latency created by their poor CMT implementation. Intel's Sandy Bridge makes do with 256KB of L2 cache per core. Hence, we have ~1.6 billion transistors in Bulldozer, which creates leakage, which in turn translates into high power draw. Those of you that defended this failure of a CPU claiming that it was designed for servers should know that performance/watt is all that matters for servers, because you run into cooling issues otherwise, which translate into high energy costs both for the servers themselves and for the cooling, which is done mostly by air conditioning. And we all know that air conditioning draws lots of electricity.
Bottom line: it's not Global Foundries fault, it's AMD's fault for not designing a more efficient CPU with less transistors (Sandy Bridge quad core has ~1.12 Billion).
Belard - Wednesday, October 12, 2011 - link
Those are excellent points. It's like AMD decided to DO EVERYTHING wrong that Intel and Nvidia did and put it all onto a single chip!!
The Fusion CPUs provided small size, good performance and excellent GPU at a good price. Bulldog is a mutt of a mess.
Brinip - Wednesday, October 12, 2011 - link
After years of being an avid AMD fan, I finally switched this year to Intel's Sandy Bridge CPU. AMD could no longer compete with the pure power of the Sandy Bridge core. I was really hoping Bulldozer would re-write the books and put AMD back on an even footing. In some ways they have re-written the books, as the architecture of the new CPU is quite a change; however, for now at least, they don't pose any threat to Intel's superb Sandy Bridge core.
Not only are sandybridge CPUs fast, they are also fairly low power and run very cool. What more can you say really. Fast, cool and efficient.
Perhaps as AMD develops it further, improvements may come; however, don't expect Intel to sit still either.
LordConrad - Wednesday, October 12, 2011 - link
Looks like AMD jumped on the NetBurst bandwagon. My next computer build will be Intel.
Artas1984 - Wednesday, October 12, 2011 - link
QUOTE: I was hoping for Bulldozer to address AMD's weakness rather than continue to just focus on its strengths
UNQUOTE
And while the new chip does feature a slight improvement in multi-threaded apps, its pound-for-pound performance is slower than the AMD Phenom II X6 1100T.
An 8-core, 32nm, 8MB L2, 3600MHz CPU SLOWER than a 6-core, 45nm, 3MB L2, 3300MHz one!!!
Seriously...
W H A T T H E F U ^ K I S T H I S S H I T???
But it's not a big deal. There are greater problems in this world than CPU debates. At least we have AMD and Intel. Imagine if there was only one company making processors - my god, that would be a problem then...
ClagMaster - Wednesday, October 12, 2011 - link
Bulldozer would be a dynamite server processor provided the OS was optimized for the cores that are available. I think the processor has real potential but needs further optimization with a later stepping.
As for PC usage, performance of the FX-8150 is not bad, but I could do better with an i5-2500. I am a low-power person, and if I were opting for a PC upgrade tonight at Newegg, I would choose the i5-2400S and a Z68 Intel motherboard.
jaygib9 - Wednesday, October 12, 2011 - link
I don't know about the rest of you, but personally, if I'm going with an 8-core CPU, I'd go water cooling anyhow. Air-cooled PCs should be on their deathbed and water cooling should be the norm. AMD and Intel are leaving more CPUs unlocked for overclocking anyhow. What do you think the performance would look like at 5GHz/core compared to stock speeds?
jaygib9 - Wednesday, October 12, 2011 - link
Personally I wonder if soon they'll require water cooling and quit worrying so much about the TDP (within reason, of course); then they can concentrate on the clock speed.
Belard - Thursday, October 13, 2011 - link
Geee... a 25% overclock for the fake 8-core CPU will not make up for the 30~50% performance disadvantage over a cheaper $200 Intel CPU.
THAT is a serious problem.
Water cooling is stupid, in general. Less than 1% of desktop computers have water cooling. They will fail sooner or later.
silverblue - Thursday, October 13, 2011 - link
Isn't the Asetek cooler self-contained?
jaygib9 - Thursday, October 13, 2011 - link
Belard, water cooling is what many of the higher-end gaming systems already run. What makes it stupid? It's far more effective than air cooling; it just requires more equipment and is a little more costly. You say a 25% overclock won't make up the performance difference, but what about possibly going up to 6GHz/core with water cooling? Do you really think that wouldn't have some pretty good numbers? Hey silverblue, I'm not sure.
dillonnotz24 - Wednesday, October 12, 2011 - link
This is a rather naïve-sounding post, but it just occurred to me and I figured I might share. Now, I'm putting a lot of faith in simple marketing gimmicks here, but bear with me, and you might find this excellent food for thought.
When I first discovered the leak concerning AMD's new Bulldozer consumer CPUs, I was kind of put off by AMD's naming scheme. FX 8150 seems like such a small number, and obviously wouldn't appear appealing to the eyes of un-savvy consumers. Now, one might find this claim a bit irrelevant, but if you look at history, numbers sell. Even AMD confirmed this when it launched its new series naming jargon, the "A4s, A6s, and A8s." This is quite obviously a marketing illusion to make AMD processors appear better than Intel's Core i3s, i5s, and i7s in an area that most unaware consumers will see first: the name!
That said, I started thinking about processor branding. In the past, AMD has used a really strict branding system for its last two CPU designs. A Phenom II's part name very consistently correlated with the CPU's clockspeed and, therefore, performance. Slap on an extra +5 to the name and you got an extra 0.1 GHz of CPU frequency. Also, the higher-end CPUs were always placed in the higher spectrum of the thousands place. The top-of-the-line quad cores populated the numbers 925-1000, while the hexa-cores resided in the 1000-1100s. The rebranded CPUs based on original Phenom and Athlon architectures were given much lower values in the 1000's place, with the very popular 555 BE being a prime example. With Llano, the top-end A8-3850 reiterates this phenomenon. The further the part name extends from the number "4000," the less performance you received from the CPU and GPU, relatively incrementally. So, as you can see, AMD consistently used this strategy to give value to their parts without listing a single specification. Larger numbers generally mean more performance, and to the casual onlooker, unfamiliarity with the performance range you actually received from the processors in comparison to Intel's made that sub-$200 price point look really tasty.
So, I say all that to present the following theory. Given that these processors can reach 4.6 GHz on air, and the unicorn-like 5.0 GHz (presumably) on AMD's water-cooling solution, there seems to be a lot of headroom for AMD to pull off the most unprecedented comeback in the history of computing. That's right, I'm saying that maybe AMD intends to release new Bulldozer variants with upped clockspeeds and an actual included water-cooling solution for a raised price point...
dillonnotz24 - Wednesday, October 12, 2011 - link
...a raised price point. Could we see a future Bulldozer AMD A8 8950 @ 4.5GHz with water cooling bundled for $350 once GlobalFoundries gets its act together with producing reliable chips? Think about it... AMD's CPU frequency stepping and naming is nowhere to be found with these CPUs, and they are all huddled down around the number 8000. If this is actually the very bottom of the spectrum, this would mean that the very low-end Bulldozer variants were on par with the best of Phenom II. Subsequently, the higher-end Bulldozers I propose would have nothing to lose, but everything to gain with higher clock speeds. All they can do is go up! With higher clockspeeds, Bulldozer could make up for all its woes seen here today in single- and double-threaded applications, which comprise nearly 50% of consumer-level apps. There's potential here, but I will admit to those of you who find this whole concept absurd, I have my doubts. Can AMD do it? They'd have my eternal respect, and wallet, if they do.
Belard - Thursday, October 13, 2011 - link
Sooner or later... someone (perhaps AnandTech) will benchmark a 5GHz AMD FX 8000-series CPU.
If said 5.0GHz CPU (water cooled) is still SLOWWWER in any way compared to a $200 Intel 2400 (3.1GHz) or the $210 2500, who would care to buy such a $300~350 chip?
Okay... I overclock the 2500K to 4GHz and it kills the 8150 at 5~6GHz... Nobody buys the 8150 or higher. It just doesn't matter... it's too slow.
stephenbrooks - Wednesday, October 12, 2011 - link
They support FMA instructions but then don't fuse multiply and add micro-ops to *make* FMA instructions (as far as I can tell from the article). That's stupid.
The way they've done it, everyone has to get a new compiler to take advantage of their chips. If they created FMAs in the muop-fusion stage, then even older software would get a boost too.
mczak - Wednesday, October 12, 2011 - link
You can absolutely not fuse mul+add on the fly to FMA, as the results will be different. Now you can argue the result is "better" by omitting the rounding after the multiplication, but the fact is you need to 100% adhere to the standard, which dictates that you do it. Software might (quite likely some really will) rely on the hardware doing the right thing.
For the exact same reason, compilers can't do such optimizations either, unless you specifically tell them it's OK and you don't care about such things (it's not only standard adherence but also reproducibility - such compiler switches also allow them to reorder things like a+b+c as a+(b+c), which isn't otherwise allowed either for floating point, as you can get different results, which makes things unpredictable).
(gcc, for instance, has a -ffast-math switch which I guess might be able to do such fusing; I don't know if it will, though I know you can get surprising bugs with that option if you don't think about it...)
stephenbrooks - Thursday, October 13, 2011 - link
Thanks for explaining that. I'd kind of assumed FMA would just round as if the MUL happened first. Defining it "correctly", they've thrown away a lot of compatibility for a really marginal increase in accuracy!
Pipperox - Thursday, October 13, 2011 - link
Nope, it's not marginal. Basically with your "fused madd" you'd get code which on Bulldozer gives slightly different results than on any other CPU... silently.
This is called a bug.
It is just not acceptable for a CPU to produce "optimizations" which alter even slightly the expected numerical output, because then the programs which run on them would fail in very slight and hard to track ways.
mczak - Thursday, October 13, 2011 - link
That isn't quite correct. There is real demand for fused multiply-add; not doing rounding after the mul is something which is quite appreciated. You just can't use FMA blindly as a mul+add replacement, but it's perfectly defined in standard floating point too nowadays (IEEE 754-2008).
Besides, it would be VERY difficult for the CPU to fuse mul+adds correctly even if it did intermediate rounding after the mul. First the CPU would need to recognize the mul+add sequence - doable if they immediately follow each other, I guess, though it requires analysis of the operands. Unless both instructions write to the same register, it also wouldn't be able to do it anyway, since it cannot know if the mul result won't get used later by some other instruction.
This is really the sort of optimization which compilers would do, not cpus. Yes cpus do op fusion these days but it's quite limited.
stephenbrooks - Thursday, October 13, 2011 - link
Not trying to argue with you about the accuracy issue - if "FMA" is defined in a certain way, that's how it's defined and an instruction that rounds differently is a different instruction.However, imagine AMD could implement a "Legacy FMA" (LFMA) instruction in their FPU - which would round as if the MUL came first. You could then fuse MUL, ADD pairs into LFMA instructions without producing bugs. Not sure whether the two types of FMA could be done on the same hardware (they are basically different rounding modes) without a large overhead though.
I don't really understand why there's a big demand for not rounding after the MUL because normally these instructions show up in code like
for (n = 999; n >= 0; n--) total += a[n]*b[n];
...and the potential rounding inaccuracy comes in the add stage: there are often lots of adds in sequence, but not normally lots of MULs, and adds suffer more often from the problem of accumulating many small values. Anyway, I know in my code there are lots of instances of doing the "multiply add" operation, and it would be nice to have some sort of CPU acceleration for this.
TheDude69 - Wednesday, October 12, 2011 - link
The party is over! AMD is no more! They have successfully designed themselves out of the desktop CPU business. I applied for their CEO position and they went with a moron! You can all kiss low desktop CPU prices goodbye! Congratulations INTEL, you now have a CPU monopoly!! We can only hope that Nvidia will come through with a super-fast Tegra that will outperform Intel in the netbook arena.
I don't think AMD can pull a rabbit out of its hat now. Guess I'll cave in and buy an i7.
I feel like I'm going to puke........................
policeman0077 - Wednesday, October 12, 2011 - link
I am a newbie and have quite a lot of questions after reading the review of Bulldozer.
1. What are heavily threaded tasks? Does MATLAB count as one? I heard MATLAB uses a lot of FP resources; if so, how can Bulldozer beat the i5 with only 4 lower-efficiency FPUs? Does browsing a lot of websites simultaneously count?
2. If single-core CPUs A and B have the same frequency but different efficiency and work on the same task without full load, will they accomplish the task in the same time?
3. So if single-core/thread performance is very important, doesn't the situation I mentioned (if it is true) totally fail to show the benefit of a high-efficiency core? (Not considering power consumption.)
4. Will many applications let one core be fully loaded without splitting the task to another core? What kind of application suits this kind of situation? Any examples?
5. In the case of 2, if another application requests the CPU's resources, will the core with higher efficiency respond quicker?
6. In the case of 2, consider multi-core CPUs A and B. If one core of each of these two CPUs is nearly fully loaded and, at the same time, another application requests CPU resources, and the operating system decides to let this application run on another idle core, will the higher-efficiency core respond faster?
TheDude69 - Wednesday, October 12, 2011 - link
Don't bother... I have been following this saga since AMD's first CPU and, until today, was an AMD fanboy. Buy an i7 now before Intel triples the price for this CPU!
GustoGuy - Wednesday, October 12, 2011 - link
I am really surprised that AMD didn't at least match the i7 in a majority of the benchmarks, and what is even more disturbing is that it sometimes performs worse than a Phenom II X4 on some benchmarks. AMD could have tweaked this processor in all the time it took, and it had a stationary target in the i7, so I am perplexed at why they were not able to get it to benchmark at least as well as an entry-level i7. Hopefully it will be like the first generation of the Phenom X4, where AMD was able to add L3 cache and tweak the overclocking abilities so they could release a product that was at least competitive with the Core 2 processors. I like AMD; however, they have to be competitive with Intel and cannot afford to give up, in some cases, a 50% deficiency when compared to the i7.
Belard - Wednesday, October 12, 2011 - link
There isn't much AMD can do with this. They are "planning" on a TICK TICK TICK 10~15% performance increase with yields, higher clocks and tweaks, which is what they and Intel normally do.
True, we CANNOT expect AMD to compete directly with Intel. They simply don't have the resources. Not the money, not the talent, not the manufacturing abilities. Perhaps, if they were not HELD DOWN by Intel during the AMD32~64 days, they'd have made the much-needed profits to afford a much bigger R&D department. There was a point at the end of the AMD64/X2 dominance at which AMD couldn't make enough CPUs.
If the 8150 was marketed as what it is... a quad-core CPU, and was, across the board, no more than 15% slower than the 2600 at a price of $220 (the 2600 sells at $300), then it would be considered a GOOD buy. But it's worse than that in performance and price.
It takes years to develop a new chip. It would take 1-2 years to fix the problems with Bulldozer, if they could be fixed. But look at it this way: how did Intel fix their P4/NetBurst problem? Oh yeah, they developed a whole new design!!
BD is as flawed as the P4. It's very difficult to FIX a HUGE CPU... and SB is about half as complex and half the size of BD! So what... AMD is going to add even more junk to the design?
Hence, it costs AMD about twice as much to make such a CPU compared to intel. So do the math. Intel makes more profit per CPU. For AMD to compete, they would need to reduce their price by 25~30% - which means almost NO profit.
AMD is screwed. They'll really need to work with Llano a lot more... and look at burying Bulldozer with something else.
If Piledriver does somehow kick butt (there are no indications that it will) - too bad, a large chunk of AMD users would have already moved on to intel. And when Piledriver does finally hit the market, intel will have already released an ever faster CPU.
Did I say AMD is screwed?
Belard - Wednesday, October 12, 2011 - link
Seriously?
While I won't call myself an AMD fanboy - as I own both Intel and AMD systems - I've been drooling for a Sandy Bridge-like AMD part. I've been buying and selling AMD systems for desktop users for years. In general, I prefer AMD chipsets over Intel's, I like your GPUs, etc... With the release of BD (Bulldozer) FX chips... the WAIT IS OVER!!!
My next system will be an Intel... my next customer builds will be intels with 2300~2600k CPUs.
I *CANNOT* sell my customers a sub-standard part, which is what BD is. Why the hell would I have them spend $250 for a CPU that can't consistently compete with $150 or two-year-old CPUs?
I think we know why Rick Bergman left AMD; I don't see him signing off on such a crappy CPU. Seriously, why bother? Llano's OLDER Fusion design is more attractive than this insulting FX garbage.
What AMD has done with the release of these BD/FX chips is create more sales for Intel, nothing more. Only a fool would buy an FX 8150... just like the fools who spent $1000 on the Intel EE CPUs (okay, not quite that dumb, since these AMDs are 1/4 such prices). These "8-core" CPUs are actually 4 cores, the 6 = 3, and the 4 is a dual core. An enhanced version of the Hyper-Threading Intel introduced 10 years ago.
There is a SEVERE problem when your "8 core" CPU can't surpass intel's $150 dual core CPUs. Why AMD, why did you take a page out of intel and Nvidia and do the SAME stupid thing? This *IS* your Netburst and FERMI all wrapped in one. A BIG, HOT, EXPENSIVE and SLooooow product that doesn't impress anyone, other than the stupidity of the design. You think WE should wait 2-3 years for you to ramp up speed to 5-6Ghz to say you're competitive with TODAYS intel CPUs? I don't think so.
After an hour or so of reading this review, here is what happened: 5 sales for desktop builds have just gone to Intel i-whatever-it-is 2500 & 2600s. You make me and others LOOK LIKE FOOLS for waiting for Bulldozer or Bulldog to come out and kick some Intel butt. It didn't. No, we were NOT expecting you to surpass Sandy Bridge (SB)... but if your "$250 8150 Best CPU" was at least up against the 2500~2600 in performance, it would be acceptable. But on Newegg this $280 CPU is slower in most benchmarks than the i5-2400, which is $180. The 8150's power usage and heat are through the roof compared to the faster i5-2400, which is $100 cheaper and faster in games and most productivity.
No gamer in their right mind would spend nearly $300 for a CPU that is about 25~40% slower than the similar or cheaper priced Intels. Big deal if they are unlocked... so are the K chips, which would only pull ahead further.
(We could use a review showing a 5Ghz 8150 vs a 5Ghz 2600K - but I would expect the AMD deficit to remain)
The heatsinks on SB CPUs are tiny compared to AMD... that means less noise, less heat.
If a client needs a custom budget computer, I'd go with a $100~130 AMD CPU... that is it. If AMD wants to compete with the CURRENT Sandy Bridge, the 8150 will need to be a sub-$200 part (hey, isn't Intel about to drop their prices??). Their BS "4 core" will need to be $100... but we'll need to see how it performs in the real world... to see if it's worth that much money.
This article has over 250 posts in less than 24 hrs... and it's the voices of very unhappy AMD users.
I still can't believe AMD went the P4 route. They spent years trying to CHEAT performance and this is the result? Luckily there is lots of demand for cheap CPUs and ATI GPUs, which should keep AMD alive.
descendency - Thursday, October 13, 2011 - link
The thing that bothers me most is the Dirt 3 performance.
According to an AMD rep at the AMD launch press conference, games like Dirt 3 would be able to utilize Bulldozer's "8 cores" to deliver awesome performance. The truth is that it does worse than the 1100T (the one I already own).
wolfman3k5 - Thursday, October 13, 2011 - link
It's finally here! I have been waiting for one of these. It's another Hitler Video, this time it's about Bulldozer. Funny as hell...The video can be found here: http://www.youtube.com/watch?v=SArxcnpXStE
Artas1984 - Thursday, October 13, 2011 - link
Hey, thanks for the notice! I was expecting this already!
GatorLord - Thursday, October 13, 2011 - link
OMG! That is some seriously funny sh1t! I thought I'd blow a kidney or something... I'm trying to type this through the tears, I was laughing so hard!
I800C0LLECT - Thursday, October 13, 2011 - link
I think the majority of these comments show just how fickle consumerism is in America. Anyways, tomorrow's vision vs. current real-world performance is the rat's nest.
They obviously pushed this towards server markets. Maybe that's why there wasn't much fanfare with the marketing gurus?
The performance obviously doesn't reach out to the niche market of computer gamers. Let's see how lucrative this becomes if AMD is able to crack the not so trendy server market. Those guys don't like to break old habits. Stability is kind of a big deal.
I can also see how this design creates a plug and play product for many different markets. The downside to that is it's one design for all which has already proven inefficient for Gamers. But what about consumer electronics? They generally want cheap and simple. Performance be damned.
Interesting hand AMD.
CharlieBBarkin - Thursday, October 13, 2011 - link
I'd hate to break it to you, but even though Bulldozer was targeted towards the server market, it is a complete non-starter in that segment. Look at the power consumption of Bulldozer. It's off the charts, and it has less raw performance than Intel chips. I can't imagine any system administrator dumb enough to install Bulldozer chips into any sort of compute or server farm. Why would a farm waste money powering and cooling Bulldozer chips when it would be so much cheaper and higher performance to just use Intel CPUs?
silverblue - Friday, October 14, 2011 - link
Could just be the ASUS board causing the issues. At any rate, once you overclock past a certain point, power usage just accelerates madly, and you're not going to see these sorts of high frequencies on the server anyway so the point is rather moot. Additionally, with servers, they're a little more focused on power efficiency than with client machines. Magny Cours was a 12-core CPU and the 6176 had a TDP of 105W if I'm correct, so despite its 2.3GHz clock speed, that's not too bad considering.luckylinux - Thursday, October 13, 2011 - link
I also have waited for a long time to finally see if I could replace my Phenom II X4 and X6 with the new super Bulldozer. Nevertheless, I'm pretty disappointed by the raw performance of this new chip.
I began using AMD products about a year and a half ago, so I'm not really an AMD fanboy... however, I began to like them for choosing not to indulge in Intel's shitty hobby of switching through 3 sockets every 2 years.
I took an Athlon X2, an X4, a Phenom II X6 and two E-350s from them and am very happy with what you get (a bang for your money).
However... looking at the Athlons (and even more the Phenoms), power consumption is rather disappointing compared to the Sandy Bridge CPUs I recently bought (yeah, I did not want to leave a 24/7 machine on drawing 60 watts at idle when SB idles at much less with its power-gate tech).
Bulldozer's power gates are rather disappointing. I hoped for much higher frequency or lower power consumption at load due to the transistor shrink. And 2B transistors????? Seeing as some components are shared across cores, I think this is WAY too much!
BUT one thing deserves to be said. In my case (but that's just me, eh) I wanted a multicore processor which supported both AES-NI and ECC memory. For ECC you can either take an ASUS AM3(+) motherboard (about $120 for the cheapest AM3+ ones), or a 1366 or 1155 C202/4/6 board, which costs about $260 at least! For AES-NI the only alternative seemed, up till now, to go with a Xeon, which costs quite a bit more. Furthermore, Xeon CPUs are not so easy to get your hands on.
I think that no one gave them credit for their efforts to implement AES-NI. If you want a home server, that's a very appreciated bonus.
Although I can understand why many of the users here are angry at the new chip because it doesn't perform very well in gaming, AMD's choice to disregard single-threaded apps is in itself quite good. In a market where in a few years we'll see 40+ cores on a single desktop CPU (well, in servers that'll be next year with Komodo!), what the heck can you obtain with a single core? Idling 39 cores to speed up one core to 8GHz (assuming that's electrically possible)? Silicon dictates the limits on the frequency you can use in your chip. AMD understood this long ago, when Intel and their S775 tried to surpass the 3GHz wall with the Pentium 4.
Since we're going into a multithreaded world, 40 x 4GHz (something like that at 22 or 10nm, I think) will perform WAY better than 1 x 8GHz (provided that apps are well developed). Why does Intel not understand this (S2011 provides only 6 cores!)? However, with 2B transistors a few more cores could've been added :D
AMD has to work better at power consumption. Instead of clock-for-clock performance they focus on the number of cores, which is what the future is! You will say that Intel's performance per core is almost double, but AMD's cores are twice as many, so... How long will Intel continue to release quads while AMD gets to sixteen? If you're an Intel fanboy, just say that AMD's BD is shit, but I think this kind of strategy is going to pay off in the long term. In fact, AMD manages to put more cores in a single die, whereas Intel is always 2 cores behind. That alone shows that AMD engineers aren't idiots!
luckylinux - Thursday, October 13, 2011 - link
EDIT: Komodo will have 20 cores (only :D)
I800C0LLECT - Thursday, October 13, 2011 - link
More cores? I guess I was trying to ask: as fabrication processes shrink to 22nm and beyond, how much overhead is reduced by plugging in a few more cores vs. a new design and architecture?
I'm wondering if AMD just set the foundation for something big?
Seems like they bet on software making some leaps and bounds.
wolfman3k5 - Thursday, October 13, 2011 - link
It's finally here! I have been waiting for one of these. It's another Hitler video, this time about Bulldozer. Funny as hell... The video can be found here: http://www.youtube.com/watch?v=SArxcnpXStE
just4U - Thursday, October 13, 2011 - link
Ok, in several reviews now I've heard about the lacklustre single threaded performance. Just how bad is it? If you had to compare it to another CPU out there, which Intel and which AMD CPU would it compare to?
Desperad@ - Thursday, October 13, 2011 - link
Promises of AMD:
http://blogs.amd.com/work/2010/08/23/%E2%80%9Dbull...
http://blogs.amd.com/work/2010/08/30/bulldozer-20-...
http://blogs.amd.com/work/2010/09/13/bulldozer-20-...
http://sites.amd.com/us/promo/processors/Pages/ope...
silverblue - Thursday, October 13, 2011 - link
And here's me thinking you were either banned or got lost on another tech site. Oh, by the way, you usually say "craps". Seems your Engrish has improved a little.
SanX - Thursday, October 13, 2011 - link
(to destroy the company).
SanX - Thursday, October 13, 2011 - link
"BullShiter"
grant2 - Thursday, October 13, 2011 - link
You're the tech expert writing a commercial article... so why can't *YOU* give us a judgement?
gvaley - Thursday, October 13, 2011 - link
Reviewers tend to avoid extreme conclusions and in this case, it would have been an extremely conclusive conclusion.
cjs150 - Thursday, October 13, 2011 - link
Just got the pricing in the UK. The AMD FX-8150 is about £30 or $45 MORE expensive than the i5 2500K, but £50 or $75 cheaper than the i7 2600K.
As Anand said, BD can just about hang on to the i5's coat tails (and he is being generous). If the i5 is noticeably cheaper, what exactly is the point of BD?
Tchamber - Thursday, October 13, 2011 - link
I can understand being disappointed in the performance of BD, but when a high end GPU requires 600W, what's another 30W for a CPU? Lower is nice, but how many of us who game and have a nice CPU/GPU combo actually count the watts? Heck, when I got my first i7 920 I got the GTX 285 thinking I would later run SLI, so I have a big PSU. Now I have an i7 970 and the same GPU and can still upgrade to whatever card I want. I tend to think multithreading is still growing, and we will see more apps use more cores, and Windows 8 might utilize an FX core more efficiently. But calling BD a failure is rough; AMD never said it would trounce anything. We were promised 8 cores and we sorta got them. It is an incremental step in the right direction, and I think the future improvements will bear out in favor of this CPU. Just like Llano is doing so well in the laptop market, this could do very well in the desktop market.
Bytales - Thursday, October 13, 2011 - link
We must ALSO remember the fact that Windows 7 does not know about the special Bulldozer architecture, and perhaps that has a role too. Once the threads are optimally allocated, perhaps performance will be a little bit better.
eagle-i - Thursday, October 13, 2011 - link
I use Linux (openSUSE 11.4) for everyday work and would love to see if there is any difference. (Linux and other open-source software, being open source, are far more versatile, so they are in a better position to take advantage of the latest CPU features offered by AMD.)
I use VirtualBox to run Windows (in openSUSE) [can't use Xen/KVM due to no VT-x/d on my Intel CPU -- here AMD is far better: they offer you the latest features, thereby helping accelerate their adoption].
Also, BD is a new architecture and I'm sure after refinement it should get better. AFAIK it's a step in the right direction, and it's now up to AMD to pull through with that refinement.
GiSWiG - Thursday, October 13, 2011 - link
My Athlon XP 2500 (1.8GHz) overclocked to 2.5GHz stably and smacked any Intel chip. My Athlon X2 was again a nice 700MHz overclock (can't remember the model number). I have a Phenom II X4 965BE at 3.4GHz and 8GB of RAM, 1300MHz @ 6-6-6-18. I'm happy with it. From sleep, Win7 is at the login prompt before my monitor wakes up. I've stopped my PC gaming days. I occasionally encode DVDs to high quality x264/mkv (~3hrs per 2hr movie) queued overnight and it is fine. AMD was a powerhouse but I've not been overly impressed since the Bartons. I'm quite happy with my setup. Really, crossfire-ing two high-end AMD video cards and I'm set for any game. Gaming performance is dependent on video cards, not CPUs. I'm fine @ 100 FPS vs. 120 FPS. Your eyes will more than likely never see the difference.
AMD made a good business decision taking over ATI. They are beating Nvidia in many ways, including game consoles. They allow PC gamers to have adequate motherboard/CPU/RAM combos and use the money they save for higher end video cards. Unfortunately, gamers head to Intel because they think they need the highest end CPU and RAM when they really need to sink more money into video cards.
I think AMD is stronger than most think because of the price/performance ratio. If you only had $1000 to build a gaming PC, you'd be better off spending less money on an AMD CPU and more on video, and you'd still have a faster PC than if you spent the same money on an Intel setup.
I've always wished for one thing: AMD NEEDS TO ADVERTISE! Come up with a nice 6 note jingle (or 8)!
Iketh - Thursday, October 13, 2011 - link
I play FSX and Starcraft 2... both require copious processing power. And I just built an i7-2600K system with a Radeon 6870 and Blu-ray writer for... $960
paultaylor - Thursday, October 13, 2011 - link
While the benchmarks are very revealing of the "ahead of its time" nature of Bulldozer, I think AMD should've kicked off by focusing on server applications instead of desktop ones. Considering what I've seen so far I think some additional benchmarks on threading/scaling would come in handy – it would actually show the true nature of BD as, right now, it's behaving like a quad-core processor (due to the shared nature of its architecture, I presume) in most cases, rather than an octacore. Charting that out might be very revealing. The situation now looks like Intel's 2nd (3rd?) generation hyperthreading quad-cores provide more efficient multithreading than 8 physical cores on an AMD FX.
Don’t get me wrong, we’ve heard from the beginning that BD will be optimised for server roles, but then we’re outside the feedback loop. Shouldn’t someone inside AMD be minding the store and making sure the lower shelves are also stocked with something we want?
A longer pipeline and the old "we'll make it up in MHz" line reeks of Netburst, unfortunately, and we all know how that ended. Looking at the transistor count, it's got almost twice as many as Gulftown, with a 27% bigger die size for the entire CPU… which will mean poorer yields and higher costs for AMD, not to mention that either the fabbing process really gets tweaked or the speed bumps will not come at all, as the TDP is already high-ish. Ironically it reminds me of Fermi. Speaking of which… BD may become the punchline of many jokes like "What do you get when you cross a Pentium 4 and a Fermi?"
On the other hand it seems AMD has managed one small miracle: their roadmaps will become more predictable (a good thing from a business perspective) and that will exert a positive influence on system integrators. Planning products ahead of the game, in particular on this 12-month cycle, might do some good for AMD, if they survive the overall skepticism that BD is currently "enjoying".
Other than that, another fine unbiased article.
rickcain2320 - Thursday, October 13, 2011 - link
Bulldozer/Zambezi seems to look more like a server CPU repackaged as a consumer grade one. Excellent in heavily threaded apps, not so hot in single threads. One CPU that was promised but isn't here is the FX-4170. I would have liked to see some benchmarks on it.
gvaley - Thursday, October 13, 2011 - link
We all get that. The problem is, with this power consumption, it can't make it into the server space either.
kevith - Thursday, October 13, 2011 - link
Having waited so long for this, it's a bit disappointing when I compare price/performance. I went from a C2D E7300 a couple of years ago and changed setup to an Athlon II X2 250, and the performance difference made me regret it right away.
And now, I have to change my MB and memory to DDR3 no matter what I choose, Intel or AMD. So I've looked forward to this release.
And it makes my choice very easy: I'll go back to Intel, no more AMD for me on the CPU side. And Ivy Bridge is coming, and will definitely smoke AMD.
Which is sad; it would have been nice to have some competition.
eccl1213 - Thursday, October 13, 2011 - link
Earlier this week most sites reported that the FX and the BD-based Opteron 4200 and 6200 were both being released on Oct 12th. But I haven't found a single review site with Interlagos benchmarks.
Have those parts been further delayed? We know revenue shipment happened a while back but I'm not seeing any mention of them in the wild yet.
xtreme762 - Thursday, October 13, 2011 - link
I haven't bought an Intel chip since 1997. But with this BS Bulldozer launch, that is now going to change! AMD should be ashamed of themselves. I for one will now sell all of my AMD stock and purchase Intel. I will probably only end up with a few shares, but at this point, I cannot see supporting liars and fakes. And I will NEVER buy an AMD product again, not a video card, CPU, mobo, not nothing! What a disappointment AMD is... All the AMD crap I have will be tossed in the trash. I'm not even going to bother trying to sell it. WooHoo, AMD made a world record OC with a CPU not worth its weight in dog poo!
connor4312 - Thursday, October 13, 2011 - link
Very interesting review. I'd be interested to see Bulldozer's benchmarks when it's overclocked, which, if I am correct, is higher than any Intel CPU can go. AMD seems to have made a turnaround in this aspect - Intel CPUs were historically more overclock-able.
Suntan - Thursday, October 13, 2011 - link
As always, a very detailed review. But what about the capability of the "value" chips? Namely, is it worth it to spend around $100 to replace an Athlon X4 with an FX-4100? There are a number of us that picked up the X4 a couple years back for its low cost ability to encode and do general NLE editing of video. Is it worthwhile to replace that chip with the FX-4100 in our AM3+ mobos? And what kind of improvements will there be?
As you rightly stated, a lot of us are attracted to AMD for their bang-for-buck. Just because the industry as a whole wants to bump up prices endlessly, there are still a lot of us that like to see good comparisons of the performance of CPUs available for around 1 Benjamin.
-Suntan
Pipperox - Thursday, October 13, 2011 - link
Frankly, the disappointment of AMD fans seems to me quite excessive. Worst CPU ever?
What about Barcelona, then, which couldn't compete with Core 2?
Bulldozer, setting aside old single threaded applications, slots in between a Core i5 2500 and a Core i7 2600K.
Which other AMD CPU outperforms in any single benchmark a Core i7 2600k?
A higher clocked Thuban with 2 extra cores would have been hotter and more expensive to produce.
Setting aside AMD's stupid marketing, the AMD FX-8150 is a very efficient QUAD core.
The performance per core is almost as good as Sandy Bridge, in properly threaded applications.
Then they came with the marketing stunt of calling it a 8 core.. it's not, in fact it doesn't have 8 COMPLETE cores; in terms of processing units, an 8 core Bulldozer is very close to a Sandy Bridge QUAD core.
The only reason why Bulldozer's die is so large is the enormous amount of cache, which I'm sure makes sense only in the server environment, while the low latency / high bandwidth cache of Sandy Bridge is much more efficient for common applications.
I think with Bulldozer AMD has put a good foundation for the future: today, on the desktop, there is no reason not to buy a Sandy Bridge (however i'm expecting Bulldozer's street price to drop pretty quickly).
However IF AMD is able to execute the next releases at the planned pace (+10-15% IPC in 2012 and going forward every year) THEN they'll be back in the game.
saneblane - Thursday, October 13, 2011 - link
Man, you have a lot of optimism. I am a big AMD fan, but even I can't remain optimistic after this mess. I mean, how do you make a chip that is slow, expensive and loses to its older brothers? Barcelona was a huge success compared to this; it only seemed bad because expectations were high. This time around they became higher because no one expected AMD to actually go backwards in performance. WOW, that's all I can say, WOW
Pipperox - Thursday, October 13, 2011 - link
I don't understand why you all think it's slower than its older brothers. It's not, it's faster than Thuban in practically all benchmarks...
Or do you really care about stuff like SuperPi?
Pipperox - Thursday, October 13, 2011 - link
But maybe you guys think that it's slower "clock for clock" or "core for core". It doesn't matter how you achieve performance.
What matters is the end performance.
Bulldozer architecture allows it to have higher clock speed and more *threads* than Phenom.
The penalty is single threaded performance.
Again you can't compare it to a hypothetical 8-core 4.0GHz Thuban, because they couldn't have made it (and made any money out of it).
I'll repeat, the FX-8150 is NOT an 8-core CPU.
Otherwise the i7-2600K is also an 8-core CPU... both can execute 8 threads in parallel, but each pair of threads shares execution resources.
The main difference is that Sandy Bridge can "join" all the resources of 2 threads to improve the performance of a single thread, while Bulldozer cannot.
They probably didn't do it to reduce HW complexity and allow easier scalability to more threads and higher clock speed.
Because the future (and to a large extent, the present) is heavily multithreaded, and because Bulldozer is targeted mainly at servers. (and the proof is its ridiculous cache)
bryman - Thursday, October 13, 2011 - link
How about some BIOS screenshots? Is there a way in the BIOS to disable the northbridge in the chip and use the northbridge on the motherboard? Possibly better performance, or maybe a new ability to crossfire northbridges? (Yeah, I'm a dreamer.) IMO, I don't think adding the northbridge to the CPU was a good idea, especially if it pulls away from other resources on the chip. I understand what adding the northbridge to the processor does, but does it turn off the northbridge that's already on the motherboard? The northbridge on the chip makes sense for an APU but not for a performance CPU, so why is the northbridge even in there? I myself would rather see the northbridge on the motherboard, utilizing that space instead of the space on the CPU. If there isn't a way to turn off the on-CPU northbridge in the BIOS, I think the motherboard manufacturers should include the ability to turn it off and use the onboard northbridge in their BIOS, so you can at least get BIOS or firmware updates to the northbridge and perhaps get more performance out of the CPU/GPU.
When the new Radeon 7000 series video cards come out, if I buy this CPU with the 6000 series northbridge in it, am I going to take a performance hit, or am I going to have to buy a new processor with a 7000 series northbridge in it? Or will they come out with a 7000 series motherboard that utilizes a 7000 series northbridge and turns off the 6000 series northbridge in the chip, which in turn makes it useless anyway? I don't like the fact that if I buy this product and want to upgrade my northbridge/motherboard, I might have to buy a new processor and perhaps a new motherboard. Or am I just paranoid or not understanding something?
Who knows, maybe in the next couple of weeks Microsoft and/or AMD will come out with a performance driver for the new processors.
If they had come out with this processor when originally planned, it really would have kicked butt. Instead we get conglomerated ideas from over the five year period, which looks like the original idea thrown onto a 2011 die.
I am a die-hard AMD fanboy and always will be. Just kinda disappointed, excuse my rants. I will be buying a 4 core when they hit the streets, hopefully in a couple weeks.
saneblane - Thursday, October 13, 2011 - link
From the caching issues, to the bad GloFo process, to the Windows scheduler, I reckon this processor wasn't ready for prime time. AMD didn't have any choice; I mean, they already took almost an entire extra year, for Pete's sake. Even though my i5 2500 is on its way, I'm not stupid enough to believe this is the best the arch can do. There is a good reason Interlagos cannot be bought in stores: AMD knows for a fact that they cannot sell this CPU to server makers, so they are busy working on it. I expect it might take one or even two more steppings to fix this processor. The multithreaded performance is there, so they only need a mature 32nm process to crank up the speeds and maintain the power consumption. IMO
arjuna1 - Thursday, October 13, 2011 - link
Reviews @ other sites like Tom's Hardware and Guru3D are starting to make this look bad. How come everyone but Anand got to review it with watercooling?? Is this site on such bad terms with AMD?
B3an - Thursday, October 13, 2011 - link
Water cooling isn't magically going to help performance or power consumption in any way so why does it matter?? When you buy this CPU it comes with air cooling, and Anand was right to use that for this review.marcelormt - Thursday, October 13, 2011 - link
http://www.tomshardware.com/reviews/does-amds-athl...
Patrick: The 6000+ is the fastest Athlon 64 X2 dual core processor ever, but what happened to the FX family?
Damon: Patrick, you are right. The X2 6000+ is the fastest AMD64 dual-core processor ever... so why isn't it called FX? To answer that I have to explain what FX is all about... pushing the boundaries of desktop PCs. FX-51 did that right out of the gate, with multiple advantages over other AMD processors, and a clear lead on the competition. Move forward a bit to where AMD put high-performance, native dual-core computing into a single socket with the FX-60. Fast forward again and you see FX pushing new boundaries as "4x4" delivers four high-performance cores with a direct-connect, SLI platform that is ready to be upgraded to 8 cores later this year
Ryomitomo - Thursday, October 13, 2011 - link
I'm a little surprised you only posted Win7/Win8 comparison figures for the FX-8150. It would give a much more complete picture if you would also post an i7-2600K Win7/Win8 comparison.
czerro - Thursday, October 13, 2011 - link
I think anand handled this review fine. Bulldozer is a little underwhelming, but we still don't know where the platform is going to go from here. Is everyone's memory so short term that they don't remember the rocky SandyBridge start?nofumble62 - Thursday, October 13, 2011 - link
Crappy building block will mean crappy building.
richaron - Friday, October 14, 2011 - link
At first I was pissed off by being strung along for this pile of tripe. After sleeping on it, I am not completely giving up on this SERVER CHIP:
1) FX is a performance moniker, scratch stupid amount of cache & crank clock
2) I'm sure these numbties can get single thread up to thuban levels
3) Patch windows scheduler ffs
Fix those (relatively simple) things & it will kick ass. But it means most enthusiasts wont be spending money on AMD for a while yet.
7Enigma - Friday, October 14, 2011 - link
Biggest problem for a server chip is the load power level. It just doesn't compete on that benchmark, and that's one which is VERY important in a server environment from a cost/heat standpoint. Let's hope it's just a crappy leaky chip due to manufacturing, but it's too early to tell.
richaron - Friday, October 14, 2011 - link
I've worked in a 'server environment'. Of course power consumption is an issue. At the lower clock speeds, and considering multithreaded performance, this is already a good/great contender. For virtual servers & scientific computing this is already a winner. With a few (hardware & software) tweaks it could be a GREAT PC chip in the long term.
ryansh - Friday, October 14, 2011 - link
Anyone have a BETA copy of Win8 to see if BD's performance increases like AMD says it will?
silverblue - Friday, October 14, 2011 - link
There are benchmarks here and there, but nothing to say it'll improve performance more than 10% across the board. In any case, the competition also benefits from Windows 8, so it's still not a sign of AMD closing any sort of gap in a tangible fashion.
Pipperox - Friday, October 14, 2011 - link
But Bulldozer is different. The Windows 7 scheduler does not have a clue about its "modules" and "cores".
So for example it may find it perfectly legit to schedule 2 FP intensive threads to the same module.
Instead this will result in reduced performance on Bulldozer.
Also one may want to schedule two integer threads which share the same memory space to the same module, instead of 2 different modules.
This way the two threads can share the same L2 cache, instead of having to go to the L3 which would increase latency.
All of the above does not apply to Thuban; to a lesser degree it applies to Sandy Bridge, but Windows 7 scheduler is already aware of Sandy Bridge's architecture.
nirmv - Saturday, October 15, 2011 - link
Pipperox, it's not different than Intel's Hyper-Threading.
Pipperox - Sunday, October 16, 2011 - link
It is, although they're similar concepts. Let's make an example: you have 2 integer threads working on the same address space (for example two parallel threads working in the same process).
All cores are idle.
What is the best scheduling for a Hyperthreading cpu?
You schedule each thread to a different core, so that they can enjoy full execution resources.
What is best on Bulldozer?
You schedule them to the SAME module.
This because the execution resources are split in a BD module, so there would be no advantage to schedule the threads to different modules.
HOWEVER if the 2 threads are on the same module, they can share the L2 cache instead of the L3 cache on BD, so they enjoy lower memory latency and higher bandwidth.
There are cases where the above is not true, of course.
But my example shows that optimal scheduling for Hyperthreading can be SUB-optimal on Bulldozer.
Hence the need for a Bulldozer-aware scheduler in Windows 8.
Regs - Friday, October 14, 2011 - link
AMD needs a 40-50% performance gain and they're not going to see it using Windows 8. What AMD needs is... actually I have no clue what they need. I've never been so dumbfounded by a product that makes no sense or has no position in the market.
Pipperox - Saturday, October 15, 2011 - link
With a 40-50% gain Bulldozer would be even ahead of Ivy Bridge.. and what comes next. Or are we still talking about SuperPI?
Or games run at 640x480 lowest quality settings?
The fact is, almost all single threaded applications are old and already run super fast on ANY CPU; the difference can be seen only in benchmarks.
All recent performance demanding applications are properly multithreaded, and Bulldozer there is competitive with i5 2500 and occasionally with i7 2600 (and with a 10% boost Bulldozer would be competitive with i7 2600).
And this will become more and more the standard one year from now.
Sure, Bulldozer has not met enthusiasts' expectations; it doesn't perform as people would expect of an "octacore" (but it's not one, it's just a quad with a different form of hyperthreading and "clever" marketing) and it doesn't deserve the FX moniker.
But still it's the most competitive CPU AMD has launched in years, perhaps with the exception of Zacate.
nirmv - Saturday, October 15, 2011 - link
Not all applications are heavily multi-threaded; there is still a need to improve single thread performance. And even in those few loads where it's competitive in performance, it does it with twice the power draw.
See here from xbitlabs review :
http://www.xbitlabs.com/images/cpu/amd-fx-8150/pow...
Pipperox - Sunday, October 16, 2011 - link
But increasing single threaded performance has a cost in die space and circuit complexity. Bulldozer has a huge die just because it has enormous caches (8MB of L2 vs 1MB on Sandy Bridge), which will probably turn out useful in server workloads (but that's just a guess).
By looking at the die shot, you'd get a 40% die area reduction with "normal" caches.
So AMD engineers decided to drop single threaded performance improvements in favor of higher multithreaded scalability and higher clock speed scalability.
We'll see if in the long run this will pay off.
I agree power consumption doesn't look good in comparison with Intel, but it does look good in comparison to Thuban.
This is the first released silicon of Bulldozer.. i expect power consumption to improve with newer steppings and silicon process tuning.
That being said, Intel has the best silicon process in the whole industry.
AMD can't compete with that.
But i'd guess that at lower clock speeds (like in server), AMD's power consumption will improve a lot.
Looks like with the FX AMD tried to push their current silicon to the maximum which they could (within the 125W TDP which is sort of an industry standard).
LiveandEnjoyLife - Friday, October 14, 2011 - link
Some people are missing the point. At this stage in the game, processor speed is a moot point beyond benchmarks. AMD and Intel make very fast CPUs relative to what gamers and everyday users use them for. Intel CPUs are blazing fast and AMD CPUs are fast. The average Joe doesn't need more than a dual-core CPU. If you were going to actually do something that requires heavy multi-threading, then it comes down to how efficiently the app makes use of the cores and of hyperthreading. If you wanted the most performance for a multi-threaded application, you would pick more physical cores over virtual cores. So for most of us it comes down to bang for buck. 8 cores is better than 2, 4, or 6 for truly multi-threaded capable applications.
For speed tests Intel wins hands down.
If you were sitting next to someone playing a game and all things were the same except CPU, you would not be able to tell which machine is running what CPU. However you would notice if one costs significantly more than the other.
That is my 4 cents.
7Enigma - Friday, October 14, 2011 - link
Hi Anand,
Great review, but there is a text error when referring to pass one vs. pass two of the benchmark mentioned in the Subject line. You said:
"The standings don't change too much in the second pass, the frame rates are simply higher across the board. The FX-8150 is an x86 transcoding beast though, roughly equalling Intel's Core i7 2600K. Although not depicted here, the performance using the AMD XOP codepath was virtually identical to the AVX results."
But the graph clearly shows a complete flip-flop from first pass to second pass. When I look closely it appears you ordered the text and graphs differently and were referring to if you had the non-AVX and AVX-enabled graphs next to each other instead of in separate sections. Basically the text and graphs don't match up.
HTH
Iketh - Friday, October 14, 2011 - link
You're an utter retard. The reason they're sold out is newegg advertised these nicely all over their site, including the front page, with "World's first 8-core desktop processor."
There are plenty of reasons to purchase these processors aside from their performance, and that's ok. But the majority bought them thinking they're gonna "rock", and those are the ones "showing intelligence." Same goes to you for thinking the majority is well-informed/intelligent.
What's even worse, the 8-core version for sale is the 3.1ghz, not the 3.6 tested in this review. I'm seriously LOL'ing...
How many did Newegg have in stock anyhow? Wouldn't that figure matter regarding your ignorant comment?
rcrossw - Friday, October 14, 2011 - link
I have used AMD products for years. I use Intel at work. So to me there is no real difference between the two for what business and the average computer user want or expect. Does it run, does it do the work I require of it, and do my programs and network access work well and reliably?
Intel indeed has incredible Processors, fast and reliable, and in the high end - expensive.
AMD is low and mid range - with processors that the average person can afford. Who is the most innovative? Both. Today Intel has been; now I think users need to give this new x86-64 architecture a chance.
I have an Asus M5A99X EVO with an FX-6100 installed. The only problem I have had was having to upgrade the BIOS to accept the new processor. So far I have had the processor to 4.2GHz, though AIDA64 caused a BSOD on one test. At 3.8GHz it runs like a champ. Still working back to as close as I can get to 4.2 on air.
After three years I have retired my old Phenom II tri-core 720 for this, and it works for me.
I am not an extreme gamer, etc. But test it your self, before being too overly critical.
It does work for me.
As an aside, next a SSD for faster response.
For those interested:
Asus M5A99x MB BIOS 0810 ( Newest)
AMD FX 6100 at 3.8 ghz
Corsair Vengeance 1600 - 16 gigs
HIS Radeon HD 6850
Windows 7 Ultimate 64
HPLP2475W Monitor at 1920x1200 DP
WD 500 SATA
WD 1001 SATA
LG H20L BD-R
Plextor DVDR
Enermax 620 Liberty PS - I know old but works.
Thanks
davbran - Saturday, October 15, 2011 - link
I have been having a hard time writing a comment on this topic without drawing fire from trolls. This review is hogwash without more information.
If the hardware is the same on all test machines, apart from the CPUs, then it's no wonder the performance was so bad. 6 cores are going to utilize, and I am just pulling a number out of my ... hat, 4GB of RAM more efficiently than an 8 core, using simple kitchen math. No need to break out the slide rules. It's a known fact, to most, that the big bottleneck in the multicore/multiprocessor world is memory. Mind you, that's if we are factoring in that all the code used for testing was written with multi-threading in mind.
You just can't compare apples to bananas in this manner.
silverblue - Saturday, October 15, 2011 - link
Each to their own. I thought it was a pretty good review, and Anand certainly held back from slating AMD to hell.
Iketh - Sunday, October 16, 2011 - link
LOLLLLL
ThaHeretic - Saturday, October 15, 2011 - link
Here's something for a compile test: build the Linux kernel. Something people actually care about.
Loki726 - Monday, October 31, 2011 - link
The Linux kernel is more or less straight C with a little assembly; it is much easier on a compiler frontend and more likely to stress the backend optimizers and code generators. Chromium is much more representative of a modern C++ codebase. At least, it is more relevant to me.
nyran125 - Saturday, October 15, 2011 - link
What's the point in having 8 cores if it's not even as fast as an Intel 4 core and you get better overall performance with Intel? Here's the BIG reality: the high end 8 core is not that much cheaper than a 2600K. Like $20-60 MAX. You'd be crazy to buy an 8 core for the same price as an Intel 2600K... LIKE MAD!!!
Fiontar - Saturday, October 15, 2011 - link
Well, these numbers are pretty dismal all around. Maybe as the architecture and the process mature, this design will start to shine, but for the first generation, the results are very disappointing. As someone who is running a Phenom II X6 at a non-turbo core 4.0 GHz, air cooled, I just don't see why I would want to upgrade. If I got lucky and got a BD overclock to 4.6 GHz, I might get a single digit % increase in performance over my Phenom II X6, which is not worth the cost or effort.
I guess on the plus side, my Phenom II was a good upgrade investment. Unless I'm tempted to upgrade to an Intel set up in the near future, I think I can expect to get another year or two from my Phenom II before I start to see upgrade options that make sense. (I usually wait to upgrade my CPU until I can expect about a 40% increase in performance over my current system at a reasonable price).
I hope AMD is able to remain competitive with NVidia in the GPU space, because they just aren't making it in the CPU space.
BTW, if the BD can reliably be overclocked to 4.5 GHz+, why are they only selling them at 3.3 GHz? I'm guessing because the added power requirements then make them look bad on power consumption and performance per watt, which seems to be trumping pure performance as a goal for their CPU releases.
Fiontar - Saturday, October 15, 2011 - link
A big thumbs down to Anand for not posting any of the overclock benchmarks. He ran them, why not include them in the review? With the BD running at an air-cooled 4.5 GHz, or a water-cooled 5.0 GHz, both a significant boost over the default clock speed, the OC benchmarks are more important to a lot of enthusiasts than the base numbers. In the article you say you ran the benchmarks on the OC'd part, so why didn't you include them in your charts? Or at least some details in the section of the article on the overclock? You tell us how high you managed to overclock the BD and under what conditions, but you gave us zero input on the payoff!
Oscarcharliezulu - Saturday, October 15, 2011 - link
...was going to upgrade my old AM3 system to a BD, just a dev box, but I think a Phenom X6 or 955 will be just fine. Bit sad too.
nhenk--1 - Sunday, October 16, 2011 - link
I think Anand hit the nail on the head mentioning that clock frequency is the major limitation of this chip. AMD even stated that they were targeting a 30% frequency boost. A 30% frequency increase over a 3.2 GHz Phenom II (the AM3 launch frequency, I think) would be 4.2 GHz, 17% faster than the 3.6 GHz 8150. If AMD really did make this chip to scale linearly with frequency increases, and you add 17% performance to any of the benchmarks, BD would roughly match the i7. This was probably the initial intention at AMD. Instead the gigantic die and the limitations of 32nm geometries shot heat and power through the roof, and that extra 17% is simply out of reach.
I am an AMD fan, but at this point we have to accept that we (consumers) are not a priority. AMD has been bleeding share in the server space where margins are high, and where this chip will probably do quite well. We bashed Barcelona at release too (I was still dumb enough to buy one), but it was a relative success in the server market.
AMD needs to secure its spot in the server space if it wants to survive long term. 5 years from now we will all be connecting to iCloud with our ARM powered Macbook Vapor thin client laptops, and a server will do all of the processing for us. I will probably shed a tear when that happens, I like building PCs. Maybe I'll start building my own dedicated servers.
The review looked fair to me, seems like Anand is trying very hard to be objective.
neotiger - Monday, October 17, 2011 - link
"server space where margins are high, and where this chip will probably do quite well." I don't see how Bulldozer could possibly do well in the server space. Did you see the numbers on power consumption? Yikes.
For servers, power consumption is far more important than it is in the consumer space. And BD draws about TWICE as much power as Sandy Bridge does while performing worse.
BD is going to fail worse in the server space than it will in the consumer space.
silverblue - Monday, October 17, 2011 - link
I'm not sure that I agree. For a start, you're far more likely to see heavily threaded workloads on servers than in the consumer space. Bulldozer does far better here than with lightly threaded workloads, and even the 8150 often exceeds the i7-2600K under such conditions, so the potential is there for it to be a monster in the server space. Secondly, if Interlagos noticeably improves performance over Magny-Cours, then the fact that it drops straight into an existing G34 socket makes it an easy upgrade. Finally, power consumption is only really an issue with Bulldozer when you're overclocking. Sure, Zambezi is a hungrier chip, but remember that it's got a hell of a lot more cache and execution hardware under the bonnet. Under the right circumstances, it should crush Thuban, though admittedly we expected more than just "under the right circumstances".
I know very little about servers (obviously), however I am looking forward to Johan's review; it'd be good to see this thing perform to its strengths.
neotiger - Monday, October 17, 2011 - link
First, in the server space BD isn't competing with the i7-2600K. You have to remember that all the current Sandy Bridge i7s waste a big chunk of silicon real estate on a GPU, which is useless in servers. In 3 weeks Intel is releasing the 6-core version of SB, essentially taking the transistors that were used for the GPU and turning them into 2 extra cores. Even in highly threaded workloads the 8150 performs at more or less the same level as the i7-2600K. In 3 weeks SB will increase threaded performance by 50% (going from 4 cores to 6). Once again the performance gap between SB and BD will be huge, in both single-threaded and multi-threaded workloads.
Second, BD draws much higher power than SB even at stock frequency. This is borne out by the benchmark data in the article.
silverblue - Monday, October 17, 2011 - link
The reason for mentioning the 2600 is because that's the only comparison we have for the moment. I don't expect Valencia to use as much power as Zambezi even on a clock-for-clock basis.
Pipperox - Monday, October 17, 2011 - link
That has to be seen. Bulldozer's HUGE 16MB of total cache surely wasn't put there for desktop workloads.
xineis - Sunday, October 16, 2011 - link
I am quite disappointed with the overall performance, and especially with gaming performance. But what about that rumored patch fix for the BDZ? Have any of you guys at AnandTech heard of that?
silverblue - Sunday, October 16, 2011 - link
You mean this one?
http://quinetiam.com/?p=2356
Interesting if it's true; however, it's doubtful it will help across the board (pun not intended). Still, very good news for those doing rendering/encoding.
richaron - Sunday, October 16, 2011 - link
I never want to see a link to that dickbag again. His blogs about WW3, Bigfoot, and 2012 should be enough to give you an idea. Also, the dudes at Kubuntu basically do repackaging and KDE integration. They don't touch hardware.
No doubt there can be significant gains through software. But I would rather stab myself through the eyelid than read anything more by that 'person'.
xineis - Sunday, October 16, 2011 - link
Yeah, that one. I know that there is a very slim chance of that 40% boost ever happening, but the idea behind the "patch" is actually plausible. I've seen lots of people explaining how it would work, and it looks somewhat legit.
Granted, 40% is too much, but with the right tweaking, I'd be more than happy with a 20% to 30% increase in performance!
richaron - Monday, October 17, 2011 - link
Everyone would be happy with 20-30%. It doesn't seem realistic to me... but I dropped out of computer engineering :P
From what I've seen, I would guess about 5% [+/- 5%].
silverblue - Monday, October 17, 2011 - link
Turns out he was one giant troll, or seemingly so. Now, he's pointing the finger at...
"Problem solved, it's just a thermal protection issue, people have been pushing voltages too high. Maybe there's some variance in mainboard chipsets, but some overclockers are hitting really good numbers."
Really? After all that hoo-har about registry patches, BIOS flashes and the like, we're now blaming thermal protection? I'm taking this with a litre of Dead Sea water.
silverblue - Monday, October 17, 2011 - link
Hey, there has to be somebody somewhere that we can all laugh at. ;) That said, he does sound a bit... odd.
Romulous - Monday, October 17, 2011 - link
Perhaps Bulldozer.Next will actually work ;)
TiGr1982 - Monday, October 17, 2011 - link
Indeed, much, much better performance was expected from BD. I had been an AMD-focused PC buyer since 2005, AMD's "golden age", when I purchased an AMD Turion-based laptop. That CPU was actually better than the corresponding Intel competitor at the time, the Pentium M Dothan, as probably some people remember. We know the rest of the story from then till now...
But the released BD-based product in its current state seems to be barely concurrent at all in the desktop market. Presumably, its popularity will be much lower than that of the previous Phenom II lineup...
TiGr1982 - Monday, October 17, 2011 - link
By "concurrent", I actually meant "competitive".
psiboy - Tuesday, October 18, 2011 - link
Why are there no benchmarks with it overclocked... especially gaming? It would be relevant, as these processors ship unlocked as standard. All I'm asking for is a reasonable overclock on air to be included...
eldemoledor25 - Tuesday, October 18, 2011 - link
I think they all rushed, wanting to position their review as the first. If you read other posts around the web, Bulldozer comes out better positioned than the i7-2600K in many of the things reviewers initially reported. The problem was in the BIOS: the ASUS and Gigabyte motherboards were released with immature BIOSes that wouldn't hold much of an overclock. With ASRock and MSI boards, a Bulldozer at 4.6 GHz can do better than the i7 and i5, and it can reach a 5.2 GHz OC. If you don't believe me, check this out and read carefully:
http://www.madboxpc.com/foro/topic/161318-la-verdad-sobre-el-amd-fxo-bulldozer/page__st__20
Greetings to all!!!
Martin281 - Wednesday, October 19, 2011 - link
Well, the situation with AMD's CPUs is still the same... good ideas, great expectations, and manufacturing delays resulting in disappointing results compared to Intel. Bulldozer would have been very competitive 2 years ago, not these days. At this point AMD desperately needs much higher clock speeds and core optimizations to be competitive. The predicted 10-15% performance-per-watt increase each year looks really modest compared to Intel's planned CPU roadmap (it's already known that Ivy Bridge, due in Q1 2012, drops the top performance class TDP from 95W to 77W, almost 20% in power consumption alone, not to mention the performance boost from the 22nm manufacturing process). I am worried that the performance gap between Intel and AMD CPUs is going to widen in the near future, without "any light in the darkness bringing true competition in the CPU field".
siniranji - Wednesday, October 19, 2011 - link
Was waiting for the BD to come, but now, what a disappointment. Still, AMD should continue to compete with Intel; otherwise, there won't be any battle to watch. I'd love to see good pricing from AMD.
loa567 - Wednesday, October 19, 2011 - link
I think you are wrong on one point, about the FPU. You claim that one Bulldozer module has the same FP capacity as earlier AMD processors. However, in reality it has twice the (theoretical) capacity. Whereas each K8/K10 core had one 128-bit FP unit, each Bulldozer module has 2 x 128-bit FP units. They can work together as one 256-bit unit when used with the new instructions (AVX and others). See for example this page for details: http://blogs.amd.com/work/2010/10/25/the-new-flex-...
However, it is strange that this does not show in performance. Could anyone explain this to me?
Pipperox - Thursday, October 20, 2011 - link
It does show in performance. In SiSoft Sandra, 4 Bulldozer modules easily beat 6 Thuban cores.
Same goes for floating point intensive rendering tasks such as Cinebench and 3dsMax.
beck2050 - Friday, October 21, 2011 - link
"in single threaded apps a good 40-50% advantage the i5 2500K enjoys over the FX-8150." These are the apps most people use, duh.
Core for core, Bulldozer is an epic fail. This is not going to be a popular desktop chip at all. As for servers, AMD's share has dropped from 20% to 5.5% in the last few years. I doubt this chip will be the savior.
richaron - Friday, October 21, 2011 - link
They have lost ground in the server market, so a radical new design won't make a difference...? I admire your logic. For the record, I specifically look for programs/games which are multithreaded; it often indicates good programming on the whole, unless of course there are other factors limiting the system (like net speed, or the GPU). Perhaps I'm just ahead of the curve compared to your average troll, duh.
Gasaraki88 - Friday, October 21, 2011 - link
I think the main competitor for Intel in the future is going to be the ARM processor makers. As Intel moves into that space with x86, ARM chips get faster and faster, and with Windows 8 supporting ARM, you get a mix; soon ARM chips will invade the desktop/laptop market. AMD is done.
ppro - Monday, October 24, 2011 - link
I decided to get this new CPU from www.amd.com
navair2 - Monday, October 24, 2011 - link
I decided to try AMD when I "inherited" my brother's older socket 939 hardware some years ago, then built my own using a Phenom II X4 940 BE. At the time it was released, the 940 wasn't too far behind the i7 920 in many respects, plus it was about $70 cheaper... I was very satisfied with my decision. However, after 3 years of advancement by both companies and watching Intel ONCE AGAIN come up with something that gives excellent performance with ever-increasing power reduction, I was on the fence about Bulldozer even before the reviews came out.
Once I saw the majority of the reviews, I knew what side of the fence to be on for obvious reasons..."Bulldozer" just didn't hit the expectations I thought it should, especially when it comes to load power consumption. Perhaps in a couple years when it matures, but I didn't feel like waiting for AMD to iron out all the wrinkles.
My next build is already done and, sorry to say, it's NOT AMD. For what I do, the i5-2500K is just too good to pass up at combo prices that result in a $200 processor (less than what I paid for my X4 940 when IT was new).
Best wishes AMD, I hope you can make "Bulldozer" work, but for now "BD" stands for "Big Disappointment". I'll check back with you in a year or so to see how things are doing.
johnsmith9875 - Tuesday, October 25, 2011 - link
Intel's first "dual core" was actually 2 processors on one chip. They could have saved a lot of engineering time by merely shoehorning two X6 Thuban dies together at 32nm and selling it as a 12-core. Now that would have rocked!
Poxenium - Wednesday, October 26, 2011 - link
Does anybody remember the first Intel processors with the entirely new architecture, called Core Duo, Conroe-L or something? They were pretty lousy at first, with only slightly higher performance than the previous generation, and constantly overheating. Later the Core 2 Duo was a complete success, not to mention the first-generation Core i processors and of course Sandy Bridge. Considering that these Bulldozer processors are AMD's first attempt at a completely new architecture, I say that both performance and power consumption are at reasonable levels. Upcoming models will surely do a lot better.
Wolfpup - Friday, November 4, 2011 - link
I'd love to see how well the SMP client runs on an "8 core" Bulldozer part compared with a quad Sandy Bridge, and for that matter a 6-core Phenom II and 6-core Nehalem. It SEEMS like it should do really well, right? Or not? Because basically an 8-core Bulldozer is a quad core when it comes to floating point, right? And Folding uses a lot of floating point? Or...?
Also, if it really has double the transistor count of Sandy Bridge...where is the performance? It seems like even on heavily threaded stuff it's just kind of about equal with Sandy Bridge, which doesn't seem right....
JumpingJack - Sunday, November 6, 2011 - link
Except for the fact that even with that 'professional' software, the competitor is just as fast or faster while consuming 30% less power. It is unfortunate for AMD and their fan base, but BD is definitely a dud.
JumpingJack - Sunday, November 6, 2011 - link
Considering the power consumption and the reported problems with many games, e.g. Deus Ex, Portal 2, Shogun... I would see this as more appealing around the $180 mark. The 1100T is a better buy if you must do AMD.
JumpingJack - Sunday, November 6, 2011 - link
The statement is partially true. There are quite a few apps where the Thuban outguns BD, and many cases where it outperforms on energy efficiency as well.
Elric42 - Thursday, December 1, 2011 - link
I wanted to say one thing. I don't have one, but a friend of mine does, and he showed me something my i5 can't do: he was playing a game called Crysis (if that's how you spell it) and running a video editing program at the same time. I can't do that with my i5; if I did, the game would start to lag. Crysis takes a lot out of your CPU (bad programming; even video cards have trouble with that game), but BD seems to multitask better than what my i5 can do. Just wondering if it's more for people who do a lot of stuff at one time.
ZyferXY - Monday, January 2, 2012 - link
Thanks for pointing that out, because not so long ago I saw a video on AMD's web site where they were showing off an AMD Llano notebook vs an Intel Sandy Bridge Core i7 notebook. They started the same benchmark on both notebooks and the Intel was quite fast, but as they opened more and more programs at the same time, the Intel started to drop in performance while the AMD kept running stable. So my suggestion would be to run all the benchmarks on the Bulldozer and i7-2600K again, but this time open about 10 or 20 other programs at the same time; then you will truly see the Bulldozer shine. I am not an AMD fanboy; my current build is an Intel Pentium G860, and I am very disappointed in myself. I should've gone with the AMD Q640; it was around the same price when I bought it. My next build will be an AMD FX-4100.
HAmakaira - Thursday, December 8, 2011 - link
Well, I very excitedly bought an 8150-based system for number crunching, as the performance/$ looked very good. I could buy a "quiet" system for AU$1130 with an SSD and only 8GB RAM. I had previously purchased an Intel i7-2600K, but could never get it to overclock and run a 64-bit Java app (Napoleon Spike from DUG) 24/7; it fell over after 6 hrs, or 12, or 23, or 47; it always fell over, despite water cooling.
Now the bulk of my work is done by Xeons in the rack, with a couple of dual-5680 systems doing the heavy lifting (2 x 6 cores + Hyper-Threading looks like 24 CPUs to the OS). These are good, stable systems with 96GB RAM, but high overall system cost.
I wanted a few cheap, movable, fast CPUs. Boy, did the Bulldozer fail to deliver.
Higher is better; measured in byte-inversion throughput per minute:
BD 8150: 115-123k in 8/8 threads, i.e. flat out
i7 2600: 237-268k in 8/8 threads, i.e. flat out
Xeon dual 5680: 333-356k in 12/24 threads, i.e. half loaded
i7-870: 166k in 8/8 threads, i.e. flat out
Xeon dual E5520: 190k in 12/16 threads
Xeon dual 5430: 132k in 8/8 threads
The Bulldozer is the slowest and the newest... very poor performance. Eclipsed by Intel at a similar price point. I might as well replace the MB and CPU and go with an i7-3960 or 3930...
wepexpert117 - Thursday, December 8, 2011 - link
I dunno if anyone noticed, but if you study the architectures carefully, then what AMD calls a 'module' is comparable to a 'core' of Intel's. Intel's Hyper-Threading allows two logical threads to execute per core, whereas AMD's approach allows one thread per core. The most powerful of the AMD line, the FX-8170, contains 4 modules which can execute 8 threads, with 2 cores per module and each core executing 1 thread. On the other hand, the i7-2600K contains 4 physical cores and 8 logical threads via Hyper-Threading, while the i5-2500K has 4 cores without Hyper-Threading. The benchmarking results agree with this: the FX-8150 is comparable to, albeit a little less powerful than, the i5-2500K, because of the architecture difference between Intel's core and AMD's Bulldozer module. If AMD ever brings out (according to them) a 12-core FX processor (prob. FX-12XXX), then it would be really interesting to see how that matches up with the i7-2600K. Although the shared L2 cache architecture is what may be detrimental to the performance of these processors.
Jondenmark - Saturday, December 24, 2011 - link
Something is wrong. If I look at a die shot of Llano, the core is about 1.5 times the size of the 1 MB L2 cache. If I look at a Bulldozer module, it is about 1.5 times the 2 MB L2 cache. To me this indicates that a Bulldozer module is about 100% larger than a Phenom II core, which is far from the 12% extra core area AMD previously indicated was the cost of adding another core to form a module. The 12% was expected to allow AMD to nearly double the core count on a given process node to win over the server market, and to leave plenty of die space for the GPU on the Llano APU. Where am I wrong, and what is right?
Raven0628 - Saturday, January 14, 2012 - link
I believe AMD really missed its shot badly, but it is still the right social choice, because what will happen if Intel gets an x86 monopoly? AMD is still reasonably priced, and when you have to live with it in everyday life, will you really notice the difference in performance? Unless you really go for the top of the line in every part of your system, in which case you'd go for the top Intel i7. But I've never done that, and I've always ended up with a reliable, good-performing AMD system for less than $800, counting the power supply I had to replace this year. My point: unless you want a death machine, go for AMD and you will feel better about yourself ;)
PS. Sorry for the terrible English.
Ernst0 - Sunday, February 5, 2012 - link
Hey guys. There is no doubt that whatever critiques have been posted are valid, but I skimmed a few pages and saw no "consumer" comments.
I have purchased an 8150 with an AM3+ motherboard and will be putting the unit together.
In my days since the Z80 and 48K, this represents the nicest CPU ever for me.
It was affordable, and with 8 cores to task with my hobby programming, such as trying to factor RSA numbers and the like, the AMD 8-core is a dream system for the price.
I picked up a case, motherboard, power supply, 1.5 TB drive, DVD, 1 GB video card, 16 GB RAM, a 28-inch monitor, and a wall mount for the monitor so I can have two 28s, with one turned the long way for source code, and perhaps something else. Anyway, $1200 is the cost.
Now this is my first bare-bones experience too, so all in all it is exciting to get such a dream machine, and I am happy to step forward and support AMD.
I don't know what awaits when the memory arrives and I boot up, but it feels like a starship already, and I have vowed to learn OpenMP under GCC to advance into multi-core programming.
So perhaps there will be issues. Perhaps this is not all that, nor is it what will come, but from where I'm at, I am still on the AMD home team and my money is flowing in the economy.
I went from a TRS-80 to an Amiga, then to twin AMD single-core chips on one motherboard, moved to the early quad cores dreaming of dual quad cores when a system with 8 cores of that day would have cost $4900, and now picked up a system that, as a boy in 1973, I would have considered alien-UFO technology, for about what I paid for dual single-core chips just a few years ago.
So Bulldozer can't be all that bad. The price is good! I will see how she runs. I often peg cores at 100% for days when searching for RSA factors. Looks like I get more bang for the same bucks this time, and I am all for that.
Thank you AMD for such a wonderful CPU. I plan to make use of it, and thanks to the motherboard I can watch out for heat issues much more easily than ever.
Not to mention it looks like the sound system is way advanced over the last computer as well.
So from a consumer / hobby programmer point of view this is very cool indeed.
Ernst
mumbles - Sunday, February 12, 2012 - link
Thank you for being the first to actually contribute some real-world response to this architecture. So many trolls on this thread that are Intel fanboys. Also, if you're using Xen with this thing, I would be interested in seeing some feedback on how multiple guests (like more than 4) behave when trying to fight for floating-point processor time. It'd be interesting also to see if 4 floating-point threads and 4 integer threads can all run at the same time with no waiting. That might be asking too much for now, though.
psiboy - Monday, February 6, 2012 - link
What kind of retarded person would benchmark at 1024 x 768 on an enthusiast site where everyone owns at least one 1920 x 1080 monitor, as they are 1. dirt cheap and 2. the single biggest-selling resolution for quite some time now... Real-world, across-the-board benches at 1920 x 1080, please!
mumbles - Sunday, February 12, 2012 - link
I am not trying to discount the reviewer, the performance of Sandy Bridge, or games as a test of general application performance. I have no connection to any company mentioned anywhere on this site. I am just a software engineer with a degree in computer science who wants to let the world know why these metrics are not a good way to measure the relative performance of different architectures. The world has changed drastically on the hardware side, and the software world has no chance of keeping up with it these days. Developing software implementations that utilize multiprocessors efficiently is extremely expensive and usually not well prioritized. Business requirements are the primary driver even in the gaming industry, and "performs well enough on high-end equipment" (or, in the business application world, on whatever equipment is available) is almost always as good as a software engineer will be allowed time for on any task.
In performance minded sectors like gaming development and scientific computing, this results in implementations that are specific to hardware architectures that come from whatever company decides to sponsor the project. nVidia and Intel tend to be the ones that engage in these activities most of the time. Testing an application on a platform it was designed for will always yield better results than testing it on a new platform that nobody has had access to even develop software on. This results in a biased performance metric anytime a game is used as a benchmark.
In business applications, the concurrency is abstracted out of the engineer's view. We use messaging frameworks to process many small requests without having to manage concurrency at all. This is partly due to the business requirements changing so fast that optimizing anything results in it being replaced by something else instead. The underlying frameworks are typically optimized for abstraction instead of performance and are not intended to make use of any given hardware architecture. Obviously almost all of these systems use Java to achieve this, which is great because JIT takes care of optimizing things in real time for the hardware it is running on and the operations the software uses.
As games are developed for this architecture, it will probably see far better benchmark results than the i series in those games which are actually optimized for it.
A better approach to testing these architectures would be to develop tests that actually utilize the strengths of the new design rather than see how software optimized for some other architecture performs. This is probably way more than an e-mag can afford to do, but I feel an injustice is being done here, based on reading other people's comments that seem to put stock in this review as an indication of the future performance of this architecture, which really none of these tests indicate.
I bet this architecture actually does amazing things when running Java applications. Business application servers and gaming alike. Java makes heavy use of integer processing and concurrency, and this processor seems highly geared towards both.
And I just have to add: CINEBENCH is probably almost 100% floating-point operations. This is probably why Bulldozer does not perform any better than the Phenom II X4.
Also, AMD continues to impress on the value measurement. Check out the PassMarks per dollar on this bad boy:
http://www.cpubenchmark.net/cpu.php?cpu=AMD+FX-815...
djangry - Sunday, February 19, 2012 - link
Beware!!!! This chip is junk. I love AMD with all my heart and soul.
This FX chip is a black-screen machine.
It breaks my heart to write this.
I am sending it back and trying to snag the last X6 Phenom IIs I can find.
The fact that this chip is a dud is too well hidden.
When I called Newegg, they told me, "You're the second one today with horror stories about this chip."
MSI would not come clean... this chip is a turkey...
yet they were nice.
I will waste no more time with this nonsense.
My socket 754 systems work better.
We need honesty about the failure of this chip and the fact that Windows pulled the hotfix.
TLB bug, part two.
Even Linux users say that after GRUB goes in: black screens.
Why isn't the industry coming clean on this issue?
AMD's socket 939 kicked Intel's butt for 3 years, till they got it together. We need AMD, but I do not like hidden issues and lack of disclosure.
Buyer beware!
AMDiamond - Monday, March 5, 2012 - link
Guys, you are already upset because you spent your lunch money on Intel, and even with higher-end boards and memory (even with AMD having half as much memory on board, 32GB, versus Intel's 64GB), Intel is misquoting their performance again. No matter what you say, AMD is to Intel as Dodge is to Chevrolet, and when it gets down to AMD versus Intel in games, Intel has another hardcore whipping behind and ahead of it. It's the same thing as the DX4 processor versus the Pentium: even though the Pentium was a generation ahead, when running the same programs (Doom, for example) the Pentium couldn't run Doom anywhere near as well as a simple AMD DX4. The same stays true: this Bulldozer has already broken records. AMD only lacks in one area: when you install Windows, the Intel drivers already deliver at least 80 percent of the performance out of the box, whereas AMD needs a specific driver to run properly. Once that driver is matched, AMD is the General Lee versus Intel's Camaro, and it's a true shame that Intel, even with twice as much DDR3 memory, can't even pick up the torch while AMD is smoking like a jet on the highway to hell for Intel. Sorry, Intel QX9650, hahaha.
AMDiamond - Monday, March 5, 2012 - link
Watch AMD take Diablo 3 (within an expansion or two, it will be so). Intel always lags hard on gaming compared to a weaker AMD class. Point proven: Everest has a lot of false benchmarks for Intel. For example, in NWN2 a Phenom X3 8400 (triple core) has a bench of 10880, yet an Intel Core 2 Duo E7500 has a bench of 12391; that's a 2.9 GHz CPU versus a 2.1 GHz CPU. The kicker is that the Intel is a Dell and the AMD is an Aspire, with DDR2 memory on the AMD and DDR3 memory on the Intel. All the Intel bus figures read higher (like they always do), but try running the same video board on both systems, then try running 132 NWN2 maps, each medium size: no way the Intel can do it, while the AMD can run the game editor and the maps at once. Intel is selling you a number; AMD is selling you true frames per second. But you're going to say, oh, but my Intel is a better core and this and that. OK, now let's compare the price of the two systems: the Intel was $2,500, the AMD was $400. Why do you think that Phenom just stomps that Intel? (Always has, always will.)
zkeng - Wednesday, May 9, 2012 - link
I work as a building architect and use this CPU in my Linux workstation, in a Fractal Design Define Mini micro-ATX case, with 8GB of RAM and an AMD Radeon HD 6700 GPU. I usually have several applications running at the same time: typically BricsCAD, a file manager, a web browser with a few tabs, the GIMP image editor, a music player, our business system, and sometimes VirtualBox with a virtual machine.
I do a lot of 3D projects and use Thea Render for photo rendering of building designs.
I use the Conky system monitor to watch processor load and temperature.
These are my thoughts about the performance:
It runs cool and the noise level is low, because the processor handles several applications without any strain at all.
It usually runs at only a few percent average load even for heavy business use (graphics and CAD in my case).
When working you get the feeling that this processor has good torque. Eight cores means that most of the time every application can have at least one dedicated core, and there is no lag even with lots of apps running. I think this will be a great advantage even if you use a lot of older single-core business applications.
The rather high power consumption at full load is a factor to take into consideration if you put the processor under a lot of constant load (and especially if you overclock).
For anything except really heavy-duty CPU jobs (compiling software, photo rendering, video encoding), temporary load peaks are taken care of in a few seconds, and you will typically see the processor working at only its 1.4 GHz idle clock frequency. When idle, the power consumption of this CPU is actually pretty low, and temporary load peaks make very little difference in total power consumption.
I sometimes run photo-rendering jobs for up to 32 hours and think of myself as a CPU-demanding user, but still, most of the time my computer is running, it is at idle frequency. I consider idle power consumption to be by far the most important point of comparison between processors for 90% of all users, and it is not considered in many benchmarks.
It is really nice to fire up Thea Render, use the power of all cores in interactive rendering mode while testing different materials on a design, and then start an unbiased photo rendering and watch all eight cores flatten out at 100% load and 3.6 GHz.
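For anyone curious how to check those clocks without Conky, here is a minimal sketch (assuming an x86 Linux kernel that reports a "cpu MHz" field per core in /proc/cpuinfo) that prints the current frequency of each core:

```shell
# Print the current clock of each core from /proc/cpuinfo.
# Assumes an x86 Linux kernel exposing a "cpu MHz" field per core.
awk -F': *' '
    /^processor/ { core = $2 }                          # remember the core index
    /^cpu MHz/   { printf "cpu%s: %.0f MHz\n", core, $2 }
' /proc/cpuinfo
```

Run it in a loop (for example under `watch -n1`) while a render is going and you can see the cores jump from their idle clock up to full speed.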
Not only does this processor photo render slightly faster than my colleague's Intel Sandy Bridge; what is really nice is that I can run, let's say, four renderings at the same time in the background for a sun study, and then fire up BricsCAD to do drawing work while waiting. Trying to do this was a disaster with my last i5 processor. It forced me to do renderings during the night (outside business hours) or to borrow another workstation during rendering jobs, because my workstation was locked up by more than one instance of the rendering application.
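A hedged sketch of that workflow on Linux: launch the batch renders in the background with `nice` so an interactive CAD session keeps scheduling priority. `RENDERER` and the scene names are placeholders, not Thea Render's actual command line:

```shell
#!/bin/sh
# Queue four low-priority background renders, then reclaim the shell.
# RENDERER is a stand-in for the real rendering command; it defaults
# to echo so the sketch is runnable as-is.
RENDERER="${RENDERER:-echo render}"

for scene in sun_0900 sun_1200 sun_1500 sun_1800; do
    nice -n 19 $RENDERER "$scene.scn" &    # lowest CPU priority
done
wait    # blocks until all four background jobs have finished
echo "all renders done"
```

With `nice -n 19` the renders still soak up all idle core time, but the scheduler lets a foreground application preempt them, which is roughly the multitasking behavior described above.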
To summarize, this is by far the best setup (CPU included) I have ever used in a workstation: affordable price, reasonably small case, low noise level, completely modular, and I will be able to upgrade in the future without changing my AM3+ motherboard. The CPU is fast and offers superb multitasking. This is the first processor I have ever used that also offers good multitasking under heavy load (photo rendering + CAD at the same time).
This is a superb CPU for any business user who likes to run several apps at the same time. It is also really fast with multi-core-optimized software.
The AMD FX-8150 is my first AMD desktop processor, and I like it just as much as I dislike their Fusion APUs on the laptop market. Bulldozer has all the power where it is needed most, perfectly adapted to my workflow.
la'quv - Wednesday, August 29, 2012 - link
I don't know what it is with all this hype destroying AMD's reputation. The Bulldozer architecture is the best CPU design I have seen in years; I guess the underdog is not well respected. Bulldozer has more pipelines and schedulers than the Core 2. The problem is that code is compiled Intel-optimized, not AMD-optimized. These benchmarks of a bunch of applications I don't use have no bearing on my choice of CPU to buy; there are some benchmarks where an i5 will outperform an i7, so what valid comparisons are we making here? The Bulldozer CPUs are dirt cheap, yet people expect them to be cheaper still; they don't require high-clock-speed RAM and they run on cheaper motherboards, while AMD is expected to keep up with Intel on the manufacturing process. Cutting corners and going down to 32nm and then 22nm as quickly as possible does not produce stable chips. I have my kernel compiled for AMD64, and it is not taxed by anything I am doing.
brendandaly711 - Friday, September 6, 2013 - link
AMD still hasn't been able to pull out of the rut that Intel left them in after the Sandy Bridge breakthrough. I am a (not so proud) owner of an FX-4100 in one of my PCs and an 8150 in the other. The 4100 compares to an Ivy Bridge i3 or a Sandy Bridge i5. I will give AMD partial credit, though: the 8150 performs at the Ivy Bridge i5's level for an almost identical price.
Nfarce - Sunday, September 20, 2020 - link
And here we are in 2020, some 9 years after this review and 7 years after your comment, and AMD still hasn't been able to equal Intel as a gaming performance contender. AMD's only saving grace is the fact that the higher resolution demands of 1440p and now 4K essentially make any modern game GPU bound, more dependent on GPU power than on the CPU.
BlueB - Wednesday, October 5, 2022 - link
I always come back to this review every few years just to have a good laugh looking back at this turd of an architecture, and especially at genius comments like: "You don't get the architecture"; "it's a server CPU"; "it's because of the Windows scheduler"; etc., etc.
No, it wasn't any of those things. The CPU is a turd. It was a turd then, it's a turd now, and it will be a turd no matter what. It wasn't more future-proof than either Sandy or Ivy; 2600Ks from 11 years ago still run circles around it in both single- and multi-threaded apps, old and new. The class-action lawsuit against AMD was the cherry on top.
It really never gets old to read through the golden comment section here and chuckle at all the visionary comments that tried to defend this absolute failure of an architecture. It's an excellent article, and together with its comment section it will always have a special place in my heart.