55 Comments

  • Ethaniel - Monday, September 12, 2011 - link

    ... 8 are actually 4, 6 are just 3, and 4 will be only 2? That's why the economy is so messed up...
  • Andrew.a.cunningham - Monday, September 12, 2011 - link

    Sort of. Without performance numbers (which we still know almost nothing about, though I expect AMD would have told us by now if the news was good), it's hard to say how one of AMD's Bulldozer cores compares to a more traditional dual-core CPU.
  • silverblue - Monday, September 12, 2011 - link

    That's just it though. John Fruehe, for all his protestations about being server-inclined, has been very coy about that particular area as well, apparently all in the name of "good business sense".

    The longer this goes on, the less optimistic we're all going to get. I am expecting one thing though - there's very little chance of a Bulldozer 4-core, 2-module CPU matching a Sandy Bridge 4-core, 4-thread product clock for clock. Anand's own article seems to suggest that each integer core will be similar to a Stars core in performance and that's the one area that Phenom really lagged behind the competition. Bulldozer's effectiveness depends on that second core per module; I suppose it was either that or make a really beefy core that might be less efficient.
  • Belard - Monday, September 12, 2011 - link

    The performance and price will make or break the FX CPUs. It doesn't matter much if the cores don't match up with SB. Consider that a dual-core i5 will outrun a quad-core AMD or Core 2 CPU, or an old 2GHz Athlon will wipe the floor with a P4 at 3.6GHz. As long as the FX is competitive, that's what counts. For most people the A3600 is more than enough.

    It won't be until next year that AMD unifies the socket to FM2, so today's mobos won't handle the next-gen FX or A-series chips. But it's nice that AM3+ boards have been selling for a while.

    I'm waiting to upgrade my Q6600 to FX ... but it needs to be good.
  • BSMonitor - Tuesday, September 13, 2011 - link

    "Or and old 2ghz Athlon will wipe the floor of a P4 at 3.6ghz."

    This is how AMD fanboys and fanatics survive. They live off the mythology that the Athlon XPs wiped the floor with the Pentium 4. The original Athlon outperformed P3s. The ONLY thing that was true from that time is that the Athlon 64s and X2s, with their integrated memory controller, were 20-40% faster than the comparable P4s and Pentium Ds. That's it. That is the ONLY time AMD processors dominated Intel processors. A matter of about 1.5-2 years. That's it!

    P3s and Athlons were trading blows performance-wise, until you included SSE-optimized code. XPs and P4s traded blows until Intel started ramping the clock speed.

    It is simply not true that AMD ever really wiped the floor with anything but wine and cheese. "Hey, but look at us"
  • Wolfpup - Tuesday, September 13, 2011 - link

    Yeah... hmm... the first Athlons were the first AMD chips that were neck and neck with Intel, right? But not BETTER, if I recall.

    The Athlon XP was similarly competitive, but not better, I don't think.

    Athlon 64 was largely better than the Pentium 4s, but not for the whole time, am I remembering that right? Like when Northwood (2nd-gen P4) was introduced, I recall it being better than the Athlon 64... or was it the XP it was competing with? Well, anyway, I recall it being better than AMD's hardware for a while, but then AMD's next chip (maybe that was Athlon 64) was better again, and surprisingly, the third-gen Pentium 4 was pretty much a total bust in terms of better performance: 3x the transistors, a smaller die process... and they performed pretty much the same as the Northwood chips they replaced. I know AMD was largely ahead for a couple of years then, although it was kind of like it is now in reverse: AMD was better, but not to the point where Intel was a joke (i.e. right now Intel is better, but AMD's no joke... Phenom II may only compete clock for clock with a Penryn, but that's not unimpressive).

    So... yeah, AMD was better for a while with the P4, then worse, then better again, and then with Conroe/Core 2 I think Intel's had a generation or two lead ever since, with AMD always still impressive in the grand scheme of things.
  • silverblue - Tuesday, September 13, 2011 - link

    Slot A Athlons were equal to the P3. Socket A (full-speed cache) gave AMD a slight lead. The other enhancements from that point onwards - the FSB going to 133 with Thunderbird, 166 with Thoroughbred and 200 with Barton, plus the addition of SSE support and then Barton's 512KB cache - kept the Athlon family ahead per clock in most tasks, occasionally comfortably so (the P4 was initially slower than the P3 but benefitted significantly from increases in FSB and cache, as well as a design allowing for high clocks to help offset the lower IPC). The P4's main strength seemed to be in encoding. The XP had very limited headroom at the top end, meaning the Athlon 64 had to replace it in order for AMD to stay competitive.

    The A64 initially started gently due to S754 only being single-channel, but it was still the strongest CPU out there. The P4 did momentarily regain the lead with the D variant (dual core), but as soon as the X2 appeared, AMD were back on top again, at least until Core.

    I've not mentioned Prescott or HyperThreading. Prescott was rather ill-conceived (Northwood was better in my honest opinion... Preshot indeed), and HT, whilst having the capability to speed up some workloads noticeably, could actually degrade performance (off the top of my head, didn't they add in some sort of replay system to help combat the huge pipeline stalls they were getting?). HT is far more mature now.

    There's one point worth making: AMD did charge a lot for its upper models. Yes, they did outperform Intel's offerings, and yes, they weren't P4EE "expensive", but they certainly weren't something I'd fork out for. If anything, the emergence of Core and particularly Core 2 actually helped force CPU prices down in the main.

    So, to recap: for the majority of a 5-year period, AMD could quite easily boast the title of best IPC, but Intel stayed competitive (certain practices aside) thanks to much higher clock speeds. It was certainly common to see P4s heading a benchmark chart, with or without HT, albeit having to use much higher clock speeds to do so. The A64 was heavily based on the K7, which was good but never a high clocker, so I suppose it's good for AMD that Intel never did carry out its threat of producing a 10GHz P4, aside from the terrifying amount of juice that might require or the cooling involved.

    We should thank Intel for something else - they have been behind the adoption of DDR2 and DDR3. AMD always lagged in these areas, though in AMD's case platform prices would've been higher for little gain.
  • Wolfpup - Tuesday, September 13, 2011 - link

    I'm a bit worried, since weren't these supposed to launch like a year ago?

    Still... if they can be even a generation or so behind Intel, they're not too badly off, and technically for all we know they're better (though since they haven't said anything yet...)

    If nothing else, AMD isn't wasting hundreds of millions of transistors on worthless video like Intel now does, which should give them an advantage.

    Regarding the core thing... well... I don't know how I feel about it. I mean basically every "2 cores" is pretty much 2 cores' worth of integer hardware with 2 floating-point units, like you might find on a single core... so almost like 1.5 high-end cores for every 2 "cores".

    It just remains to be seen how they compare clock for clock with Sandy and Ivy Bridge, because this high-level stuff doesn't tell us anything, and that's fine.
  • flyck - Monday, September 12, 2011 - link

    Please explain to me, and this goes for all the other posters with the same remark:

    What is the minimum performance that is needed to be called a core?

    Is that Sandy Bridge? Is that Nehalem? Is that K8? Is that Bobcat, or is that Atom?

    Can you see the error you are making? A core is a core no matter how it performs. It will always be called a core. A Bulldozer module will be a dual core with some shared resources, just like any other multi-core chip was when it shared resources too...

    Just a remark: every core in a PC shares at least the main memory, the GPU and just about every I/O... so?
    In recent years they have also shared cache levels, and now we get a CPU that shares cache, front end and FPU.

    But again: performance doesn't dictate a core!
  • icrf - Monday, September 12, 2011 - link

    Let me simplify it for you:

    If you're running scalar integer code, a module is equal to two cores.

    If you're running floating point or vector(SIMD) code, a module is equal to one core.

    You can't just say "cores" without specifying what workload, as that is a major determining factor with Bulldozer.

    And even then, the power of a "core" is variable. A single integer core in Bulldozer is likely not as powerful as a single core in Sandy Bridge, but they're likely selling you more for the money. A single float/vector core in Bulldozer has a better chance of being comparable with Sandy Bridge.
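
    A minimal sketch of the distinction (a hypothetical probe, not a proper benchmark - thread pinning, warm-up and methodology omitted; build with gcc -O2 -pthread):

        #include <pthread.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        #define THREADS 2            /* two threads, i.e. one Bulldozer module */
        #define ITERS   200000000ULL

        /* Integer dependency chain: exercises the per-core integer pipelines. */
        static void *int_work(void *p) {
            (void)p;
            volatile uint64_t x = 1;
            for (uint64_t i = 0; i < ITERS; i++)
                x = x * 6364136223846793005ULL + 1442695040888963407ULL;
            return NULL;
        }

        /* FP dependency chain: exercises the module's shared FP hardware. */
        static void *fp_work(void *p) {
            (void)p;
            volatile double x = 1.0;
            for (uint64_t i = 0; i < ITERS; i++)
                x = x * 1.0000001 + 0.5;
            return NULL;
        }

        /* Run the kernel on THREADS threads and return the wall time taken. */
        static double timed(void *(*fn)(void *)) {
            pthread_t t[THREADS];
            struct timespec s, e;
            clock_gettime(CLOCK_MONOTONIC, &s);
            for (int i = 0; i < THREADS; i++) pthread_create(&t[i], NULL, fn, NULL);
            for (int i = 0; i < THREADS; i++) pthread_join(t[i], NULL);
            clock_gettime(CLOCK_MONOTONIC, &e);
            return (e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) / 1e9;
        }

        int main(void) {
            printf("integer pair: %.2fs\n", timed(int_work));
            printf("fp pair:      %.2fs\n", timed(fp_work));
            return 0;
        }

    If the one-FP-core view holds, the FP pair should scale worse across a module than the integer pair - though see the next reply for why Bulldozer's two 128-bit pipes complicate that picture.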
  • Alexvrb - Monday, September 12, 2011 - link

    Actually the bit about only equalling one core for floating point/SIMD is not entirely true. Look at the diagram again. Do some reading. AMD calls it "Flex FP"; there are two 128-bit FMACs for a reason. The only time they work as a single 256-bit unit is for AVX, IIRC. For the vast majority of FP instructions, there are two 128-bit units per module (a small illustration follows at the end of this comment).

    This isn't really a bad approach. In fact it's nearly as good as completely duplicating all the hardware, but it saves them a good bit of die space which they can use towards more cores. So yes, I wouldn't try to compare AMD and Intel on a core-by-core basis, any more than you would compare them on a clock-per-clock basis. You're better off comparing them on price/performance. If this pricing is true, these chips are very reasonable and will probably carry a decent punch. I will certainly consider them for a gaming rig.

    Also, a modern OS thread scheduler is HT aware, so I have no reason to believe a future update (Windows 8 for example) couldn't schedule to Bulldozer in a similar fashion to help maximize usage of the cores.
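
    To put that width point in code, here's a small illustrative snippet (just the instruction forms in question, built with gcc -mavx; the function names are made up):

        #include <immintrin.h>

        /* Two independent 128-bit SSE multiplies: these can, in principle,
           issue to a module's two 128-bit FMACs side by side. */
        void two_sse_ops(const float *a, const float *b, float *out) {
            __m128 x = _mm_loadu_ps(a);
            __m128 y = _mm_loadu_ps(b);
            _mm_storeu_ps(out,     _mm_mul_ps(x, x));
            _mm_storeu_ps(out + 4, _mm_mul_ps(y, y));
        }

        /* One 256-bit AVX multiply: the one case where the two FMACs are
           ganged together as a single 256-bit unit. */
        void one_avx_op(const float *a, float *out) {
            __m256 x = _mm256_loadu_ps(a);
            _mm256_storeu_ps(out, _mm256_mul_ps(x, x));
        }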
  • Angels77 - Tuesday, September 13, 2011 - link

    Can you simplify it for a PC dummy?

    How well (or not) will these modules/cores handle chess?
  • silverblue - Tuesday, September 13, 2011 - link

    Haha... the Fritz benches will be coming thick and fast.
  • Wolfpup - Tuesday, September 13, 2011 - link

    Performance isn't the issue... I mean a dual-core ARM11 is wimpy, like two 486s kind of, but it's clear-cut dual core.

    This though has 2 cores' worth of integer hardware for every one core's worth of floating-point hardware, basically. It's very different from anything before it in that regard.

    I mean yeah, Intel's current stuff shares cache, but that's not at all the same thing; it wouldn't even have to, it's just more efficient because the same data may get used by more than one core.

    That's not to say this isn't going to be a good design... we just don't know. Really we need to know how they perform on a clock-for-clock basis versus Sandy and Ivy Bridge. If it's all as fast, then an "8 core" Bulldozer will be twice as fast as a quad-core i7 at integer, and the same speed at floating point... basically. But we just don't know how it compares.
  • yankeeDDL - Wednesday, September 14, 2011 - link

    You're perfectly right, but still, 8 cores for $266 seems like a sweet deal.

    Looking at the benchmarks leaked in March (which, of course, could be totally fake), it seems that 3.5GHz of Bulldozer is on par with 4GHz of SB.
    This means that the FX-8150 should trash the i7-2600, which currently costs $299. Let's hope the leaked benchmarks are true...
  • 789e2d - Monday, September 12, 2011 - link

    They should tell us how many Bulldozer modules are in there.
    1 Bulldozer module = 2 cores.
    Also it's nice to see that AMD is back on the road in the high-end sector.
    It was about time.
  • BSMonitor - Tuesday, September 13, 2011 - link

    At $250, they must not have too much faith.
  • PlugPulled - Monday, September 12, 2011 - link

    lol
  • sangyup81 - Monday, September 12, 2011 - link

    There will be 8 integer cores but only 4 FPUs, though these will actually be 256-bit FPUs... AMD is gambling that most people will not need that much FPU.
  • DanNeely - Monday, September 12, 2011 - link

    The FPU in previous AMD cores was only 128 bits wide, so each Bulldozer module has (before architectural improvements) roughly the same FPU throughput as two previous-generation cores, except that when only one core is doing floating-point math it can use all of it.
  • red_dog007 - Monday, September 12, 2011 - link

    Plus, long term, AMD is banking on the idea that more and more work will be offloaded onto the GPU - thus the move to APUs and ditching their VLIW architecture in favor of a GPGPU design.
  • stunnery - Monday, September 12, 2011 - link

    Bulldozer is a lot cheaper than Intel's i7 2600K (~$330).
    I have a feeling that it is not going to beat the Sandy Bridge CPUs in terms of performance.

    ** Disclaimer
    I am not a fanboy of Intel or AMD.
    I am merely speculating based on the list price.
  • Skott - Monday, September 12, 2011 - link

    I would love it if BD could take the performance crown too, but I seriously doubt it can even match SB. If it could, AMD wouldn't keep pushing things back and keeping it all hush-hush, I'm thinking. AMD will continue with a cheaper product that does the job well enough.
  • danjw - Monday, September 12, 2011 - link

    Maybe not. You need to remember that AMD has been behind Intel on performance for a long time. They need to dig themselves out of a perception hole. I would really like to see AMD show some real competition to Intel. We will have to wait and see what the performance numbers look like before we assume anything. I do have to admit that since they aren't already bragging, it does look a bit unlikely.
  • XZerg - Monday, September 12, 2011 - link

    If so, then AMD is launching another dud compared to SNB and Ivy, and it is in serious trouble going forward. And so are the end users who were hoping for some competition.
  • JarredWalton - Monday, September 12, 2011 - link

    Unfortunately, all signs point to Zambezi being less than stellar; I suspect that clock for clock, a single BD core will be slower than current K10.5 stuff, but you'll get more cores. It will also be interesting to see how Turbo Core plays out; if it's as cautious as some of the other chips, the eight core chips will only run at the base (or base + 1) frequency for anything more than dual-core workloads.
  • Z Throckmorton - Monday, September 12, 2011 - link

    "Less than stellar" still leaves a lot of room for success. Outside of a few niche tasks/use scenarios, AMD has nothing that can compete with the i5-2500K, and has nothing at all that can compete with the i7-2600K. Unless AMD mindlessly priced these chips, they now have the FX-8120 aimed right at the i5-2500K and the FX-8150 aimed a bit below the i7-2600K.

    I don't really care how many cores it takes for AMD to rival Intel. I care about performance, price, and power usage - period. All signs point to Bulldozer being competitive with Sandy Bridge and failing only to exceed the current top-end SB SKU. I do fear that BD will be far less power efficient than SB, though this is not always much of a concern at the high-end.

    Again, *IF* performance matches these prices, AMD has regained competitiveness at the mid-high and (arguably) high-end consumer segments. This is great news for all enthusiasts.
  • silverblue - Monday, September 12, 2011 - link

    We still have to play the waiting game. I'm not sure about the power efficiency argument, but the A8 isn't a bad example of AMD finally closing the gap somewhat in this area.

    Those prices will only truly be put into perspective once there are benchmarks out, rather than comparing them to the price of a 2600K. I'd love to believe AMD were taking their time to build up a huge stockpile, but with the rumours about new client steppings, I'm less inclined to think that.
  • silverblue - Monday, September 12, 2011 - link

    John Fruehe has a remark about that in the comments here...

    http://blogs.amd.com/work/2011/05/16/stop-the-cloc...
  • JarredWalton - Monday, September 12, 2011 - link

    It's an interesting post that tells us essentially nothing about performance, unfortunately. BD will be a lot better in terms of power, but Turbo Core still seems pretty mediocre. There are two TC states - all cores active, or half (or fewer) active - as well as the stock non-Turbo state. We can see that the half-active state will allow up to 4.2GHz on the top Zambezi CPU, versus 3.6GHz non-Turbo, but will the all-cores-active TC state be 3.8GHz, 3.9GHz, or 4.0GHz?

    I guess that really doesn't matter too much for the 8150, but the 8120 has a 900MHz TC range, so it might matter more: 3.1GHz stock, 4.0GHz max TC, and then somewhere in between (likely 3.3 to 3.6GHz, but *where* is the question) is the TC value for all cores being active. All this assuming the chip isn't running too hot or using too much power, I guess.

    But in regards to performance, we have two integer cores with one FP unit per module, and as I stated above I expect the individual INT cores will be slower per clock than Phenom II. I might be wrong, but if not then we're looking at lower single-threaded and lightly-threaded performance compared to Intel. For heavily-threaded workloads (e.g. not games, not casual use, and not Office work), BD might be more competitive, but we'll still need to see final performance and power figures.

    In looking at the X6 1100T vs. i7-2600K (http://www.anandtech.com/bench/Product/203?vs=287)... idle power is similar, but load power is 23% higher on Thuban than on Sandy Bridge, all while providing less performance in every single benchmark. So Bulldozer would need to improve performance by around 20-30% while cutting load power by 20%, just to match the i7-2600K.

    AMD might still pull this off, but considering the lack of benchmark information I remain skeptical. (We saw K8 running long before launch way back in the day -- http://www.anandtech.com/show/883 -- and I seem to recall benches getting leaked at least several months before launch on some sites.) Just my feelings on the subject right now, as someone who hasn't seen any actual real data on BD performance -- leaked or otherwise.
  • silverblue - Monday, September 12, 2011 - link

    Completely in agreement. All we've really had since Bulldozer was announced is talk about specs, plus a huge amount of speculation. You could argue that back with the K8 previews, AMD KNEW they would be the performance leaders, and that now, they'd be rather foolish to repeat that against a dominant Intel unless they had something exceptional to showcase.

    I also find it quite odd how they tried to use Llano to promote AMD as the better choice when this might only be the case under heavy load on both the CPU and GPU. It's as if they lack the confidence to tell people that perhaps Bulldozer isn't all they wanted it to be. All this stuff about releasing benchmarks etc. on a non-predetermined date is amusing yet quite infuriating and worrying all at the same time.

    JF may know his stuff, and he may be eager to make remarks about where Intel might be going wrong, but at least Intel has a proven product line. Will Bulldozer make or break them, is it that big a deal, or are they worried after making out Barcelona to be something it wasn't?
  • Iketh - Monday, September 12, 2011 - link

    "Yes, we generally disclose launch date on the launch date."

    Freakin' hilarious chatter throughout all the posts as he adheres to the NDA.
  • Beenthere - Monday, September 12, 2011 - link

    If these are true street prices there are going to be a LOT of happy campers.
  • Tchamber - Monday, September 12, 2011 - link

    Last year I built my first i7 920 system for myself and a Phenom II system for my brother. He had a nice 4890 and I had the GTX 260. For all intents and purposes, our computers were identical in gaming performance. Granted, we don't play lots of games, but in L4D and L4D2, Wings of Liberty and a couple of others, there was no difference. Now I have the i7 970 and the 285, and I reign supreme, but that seems to be a function of cores. Dollar for dollar, his is the smarter buy, while my initial system was easily 1.5x the cost of his. I think AMD is smart to give 8 cores for so little; I read long ago that that is the future of CPUs because of the thermal limits of silicon. More = better, though 1 AMD core is not equivalent to 1 Intel core.
  • Aikouka - Monday, September 12, 2011 - link

    My assumption would be that you performed about equally because you had the better CPU but he had the better GPU. Unfortunately, AnandTech has never benchmarked the X4 890, but the 810 and 910 both perform worse than the i7 920.

    However, there is a bench for the 4890 vs the 260:
    http://www.anandtech.com/bench/Product/175?vs=170#

    You most likely lost any lead the CPU gave you, because the GPU falls behind in every game.

    Change that to the 285 vs 4890, and you win almost every benchmark on GPU alone:
    http://www.anandtech.com/bench/Product/175?vs=166

    Also, you would have had a lot better price parity if you had gone with a Socket 1156 board rather than the more expensive 1366. Given you weren't using SLI and probably weren't overclocking, I don't know why you'd waste your money... unless 1156 wasn't out yet.
  • werfu - Monday, September 12, 2011 - link

    I guess AMD is really betting its survival on this one and ought to have made it the best bang for the buck available on the market. It looks like it has finally integrated some design features Intel introduced in the first Core series. What's bugging me, however, is the high clock speed of these chips. After an architectural refresh, the next iteration is usually a die shrink, and I wonder how AMD will be able to crank up the GHz even at 20-22nm. Intel had to design its 3D transistors. I've got no idea how AMD will keep up with this, other than asking IBM again.
  • Beenthere - Monday, September 12, 2011 - link

    Don't worry about AMD, as they have quietly been working behind the scenes to bring a lot of technical improvements to the x86 architecture for consumers. The one that should be worrying is Intel, who will need to cut their prices and margins to compete while AMD just keeps rolling out better and better CPUs based on the Bulldozer infrastructure.
  • silverblue - Monday, September 12, 2011 - link

    I doubt it. Intel has substantial fabrication capabilities and Sandy Bridge is reasonably priced, so they'd happily pick up any slack once AMD runs out of Bulldozers to sell (assuming they do sell that well). We're not talking a return to the P4-K8 days when Intel was hopelessly outclassed, though people are already seeing Bulldozer's high clock speeds plus issues with yields as an indicator that the reverse might be the case this time around.
  • Beenthere - Monday, September 12, 2011 - link

    AMD blew out 12 million Llano APUs in 3 months with ZERO advertising, and the PC laptop builders can't get enough of them. AMD has a bright future ahead of them with Bulldozer-based CPUs, including the Trinity APU due in '12 and the Opteron 6200/4200 series chips shipping now, which they can't produce fast enough to meet demand either. Life is going to become much better for consumers and much harder for Intel.
  • silverblue - Tuesday, September 13, 2011 - link

    12 million is a great figure, I'm certainly not going to deny that, and there is some logic in delaying Bulldozer to maximise how many Llanos they can get out the door (more money for AMD, after all), but even if they hadn't spun off their fabs, they'd face the same situation as with, say, K8: huge demand and not enough capacity to fulfil it. It's a shame.

    Even so, I'll say it again: in terms of size compared to Intel, AMD is more than punching its weight if you look at desktop market share as well as all the design wins and product listings of their APUs. Getting the 6200 and 4200 products out before Zambezi was probably a smart thing to do - they need to address the big-money markets, after all, and Zambezi isn't it - so whilst they make money, they drive us mad in the process. ;)
  • insurgent - Monday, September 12, 2011 - link

    I hope it's the same as when the ATI 4800 series came out: cheaper but very competitive, albeit power hungry.
  • rnssr71 - Monday, September 12, 2011 - link

    It has already been stated by AMD that BD will beat K10 and K10.5 core for core, clock for clock; the question is "by how much?".
    Also, one module (two cores) will have 80+ percent more performance than a single core, while a true dual core has 90+ percent more performance, depending on the bottlenecks and how well the code is optimized for more than one core. So, really, AMD has done well with this and will most likely improve on it in the future.
    My guess is that BD, core for core, clock for clock, will be around Nehalem/Westmere performance-wise with lower power consumption... maybe a LITTLE higher. Thus the higher clock speeds at launch and the lower prices we're seeing to compete with SB.
  • RussianSensation - Tuesday, September 13, 2011 - link

    "will be around nehalem/westmere performance wise with lower power consumption.....maybe a LITTLE higher."

    0 chance this will happen, literally 0. There is no way AMD is going to sell a $266 FX-8150 processor clocked at 3.6ghz with IPC of Nehalem. Such a CPU would be 2x faster than 2500k in multi-threaded apps, and even beat the $999 Core i7-990X. Considering HT only adds about 15-25% performance increase (best case) and about 10% on average, a 6C/12T 990X would have no chance at all against an 8 Core Nehalem 3.6ghz processor.

    If AMD had such an amazing CPU lined up, they wouldn't have delayed it for 9 months, or done 4-5 re-spins to get its clock speeds higher.

    On top of that, when AMD had excellent CPUs (A64, X2, FX), they never priced them below Intel's. The FX-8150 may beat the 2500K in multi-threaded apps, but it won't stand a chance in 1-4 threaded apps. But it sounds like AMD focused more on servers and markets that need 8 cores.
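
    For the curious, the back-of-the-envelope version (treating throughput as cores x clock x IPC, with Nehalem's IPC as the unit and HT generously counted at +20%, per the figures above):

        i7-990X:                 6 cores x 3.46GHz x 1.2 = ~24.9
        FX-8150 at Nehalem IPC:  8 cores x 3.6GHz        = ~28.8

    That would be the $266 part outrunning the $999 flagship by roughly 15% in well-threaded code, which is exactly why the pricing alone argues against Nehalem-level IPC.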
  • neotiger - Monday, September 12, 2011 - link

    They've begun shipping already but STILL no benchmarks?

    This can only mean one thing: the performance sucks big time. If it were halfway decent, benchmark data would've been "leaked" long ago.
  • Iketh - Tuesday, September 13, 2011 - link

    I think what everyone is forgetting is that BD is 125W without a GPU and SnB is 95W WITH a GPU... so I have a feeling performance might actually reach SnB, but the performance:watt ratio is gonna blow, thus the lower pricing.
  • silverblue - Tuesday, September 13, 2011 - link

    TDP isn't really an accurate indicator here unless Bulldozer routinely pushes cores up in speed as close as it can get to the stated limit. The power consumption graphs are just as important as the performance data, really.
  • Iketh - Wednesday, September 14, 2011 - link

    Well of course, but with such a large difference in manufacturer-stated power envelopes, it's safe to make that assumption.

    And really what I was implying is that the cores of SnB always run below 95W and leave room available for the GPU, which is not always the case thanks to Turbo.
  • Beenthere - Tuesday, September 13, 2011 - link

    Zambezi will also be available in 95W versions, just like the Phenom II series, so no need to fret over power consumption. Trinity, which is a Bulldozer-core-based APU, will be as low as 17.5W.
  • Angels77 - Tuesday, September 13, 2011 - link

    Anyone?
  • BhargavaRam - Tuesday, September 13, 2011 - link

    Hi Anand, Andrew,

    I've been eagerly waiting for a review of the chip. When will it be published?
  • BSMonitor - Tuesday, September 13, 2011 - link

    3.6GHz with 4.2GHz turbo... yeah, lack of competition has definitely put the 4GHz mark on the back burner. I believe Anand even had 2600Ks running 4.5GHz on air. It won't be long; Ivy Bridge will definitely have a 4GHz part.
  • mackintire - Tuesday, September 13, 2011 - link

    Interesting posts here. Almost all of these questions have been discussed before. The general assumption is that Bulldozer will be close to (possibly ever so slightly below) the performance of the first-gen i7 processors. In some heavily threaded applications an 8-core Bulldozer will beat a 4-core hyperthreaded first-gen i7. Since most applications are not heavily threaded, this makes Bulldozer a mixed bag on the desktop. Keep in mind this is a server chip that is being offered on the desktop; the server market is where AMD intends to make a pile of cash. If they can sell more Bulldozers by marketing them at a great price point to normal consumers... great.

    In short, expect AMD to be competitive with all the i5s and the bottom of the second-gen i7s.

    Later,

    Mackintire
  • rickcain2320 - Wednesday, September 14, 2011 - link

    AMD is cheaper. The only reason I even have a Q6600 right now is that a friend of mine sold me the chip for less than the cost of a meal at a Chinese restaurant. I just don't like Intel's price/performance.
    Bulldozer looks great to me.
  • neilrued - Tuesday, September 20, 2011 - link

    Hello rickcain2320,

    I agree, AMD's price/performance ratio has always been better than Intel's. For the CPUs aimed at the high-to-extreme performance market segments, Intel's are too expensive compared with AMD's for intelligent, average-income people who want to build their own PC.

    Even though Intel CPUs have always had the lead over AMD in floating-point-intensive applications, for gamers these days the gaming performance of a PC relies much more on graphics card performance than on CPU floating-point performance.

    For business owners doing their accounting, it doesn't matter whether they are using a notebook with a dual-core AMD or Intel CPU. The accounting software doesn't require gigaflops to run.

    Unless a user is planning on modelling the complex atomic-level chemical interactions of carbon-based polymers to reduce the side effects of pharmaceutical drugs, modelling the evolution of the Universe by deriving detailed galactic models to better understand the role of Dark Matter, or modelling deuterium/tritium plasma behavior with Chaos Theory inside an experimental thermonuclear fusion reactor to understand how to improve the fusion time, there seems to be not much point in discussing the benefits of floating-point performance.

    The earlier reviews comparing Intel's processors with AMD's Llano APUs demonstrated how superior gaming performance depends much more on the GPU than on CPU floating-point performance. More recently, the newer AMD Trinity APU showed superior performance over Intel's equivalent due to the absence of stutter.

    I am hoping that with AMD's new Bulldozer CPU core, they have finally managed to improve the floating-point performance to at least match if not beat Intel, benchmark for benchmark, and to sell the higher-performance market segment CPUs for less than Intel.

    My other hope is that in 2012 AMD releases an APU with a dual graphics processing core, an 8-core CPU running at 5.2GHz without overclocking, and CPU cores with floating-point performance matching Intel's. The dual graphics processing core should demonstrate a significant gaming performance improvement over the current Trinity graphics processing core.

    I am neither an AMD nor an Intel "fanboy"; my choice of CPU is driven by cost-to-performance ratios. As it happens, both times I have purchased a laptop I bought one with an Intel processor, while of the six desktop systems I have built, two of them for a former employer, I always selected AMD CPUs, and I have never had any issues with performance.

    I recall when I built the two systems for one of my former employers, they were typical corporate thinkers: use only Intel. This somehow translated into a distrust of AMD processor products in the minds of some of the staff, including my former supervisor, which was odd because he is an electronics engineer like me. They thought that since AMD was cheaper than Intel, that meant a lower-quality or inferior product.

    The company was slow in upgrading their systems: the executives would get their desktops upgraded, the managers would then get the best executive hand-me-downs, and the supervisors would get the managers' and executives' hand-me-downs. I used to work in the Engineering Department, and I got the bottom-of-the-barrel obsolete Intel motherboards. I was required to build a mini server to log massive amounts of experimental data from different systems. We tried using an older Intel system, but it was slow in handling the lab data backups and would miss some data.

    At that time, I coincidentally happened to upgrade two home PCs, and had spare 900MHz and 1.8GHz Athlon CPUs with their motherboards. I spoke with the company's IT specialist contractor, and he did not mind me building the AMD-based PCs, installing the OS, and installing the applications software, provided I took responsibility if there were any problems with the two AMD-based systems.

    I did a swap with the IT specialist contractor for spare laptop parts, and he told me he could only give me parts he couldn't guarantee would work properly. The only part from this junk pile I could get working was a DVD writer, which was fine because I was able to upgrade my first laptop's CD writer.

    The faster 1.8GHz AMD system was used as our lab server, and the slower 900MHz system was used as a data-logging PC for old manufacturing equipment rescued from storage; its original PC had been reassigned to an executive's desk years before.

    The 1.8GHz AMD system was performing so well that my supervisor was impressed; he used the server daily to look at the results for experiments he was running. His jaw dropped when I told him our new server was using an AMD CPU. For several weeks, every day he double-checked the data, comparing the data logged by some Intel PCs with the AMD server, and each time noticed there were no discrepancies and no missing data. Once he concluded he could trust the new server, he stopped double-checking, although at the time he didn't say anything to me.

    A week or so later the IT specialist contractor advised me that a comparably fast Intel-based system was available if we wanted to swap it for the mini lab server. I suggested to my supervisor that I could remove the new mini lab server, and his response was that there was no good reason to replace it. He even seemed defensive when I teased him that I could stay back and replace it with the Intel-based one. He was very happy with the performance of the new mini lab file server because it made him look good in the eyes of management, and he didn't want even me messing with it.

    What gave him such confidence in the mini lab server was that after he stopped double-checking it, he'd ask me to progressively add additional lab data from each PC data logger and to double-check there was no missing data. Eventually all of the lab PCs were backing up their data to the mini server, and it never missed any data, no matter how much it had to collect.
