76 Comments

  • SuperiorSpecimen - Tuesday, August 24, 2010 - link

    Let's see some competition outside of the price game!
  • mrmojo1 - Tuesday, August 24, 2010 - link

    Awesome article, can't wait to see their release :) Should be very interesting!
  • crawmm - Tuesday, August 24, 2010 - link

    I drooled on my laptop reading this. Thank you, Anand. Good overview. And fun reading after a day of tedious (and mindless) work.
  • lothar98 - Tuesday, August 24, 2010 - link

    "In many ways the architecture looks to be on-par with what Intel has done with Nehalem/Westmere."

    I truly hope that this does not end up being how things roll out. It has been far too long since we have seen good competition across the whole consumer CPU lineup. Currently we have options and competition at the low to mid range, giving us exceptional bang for our buck. While nobody would say the mid or high end is where you get the best bang for your buck, everyone can still appreciate having options there, as well as getting value.
  • Freddo - Tuesday, August 24, 2010 - link

    Bobcat seems very interesting to me. I hope it won't take long until we see a good netbook with it, with good build quality (metal, not a plastic toy), an HDMI port and 2GB of RAM.
  • Mike1111 - Tuesday, August 24, 2010 - link

    I'm wondering: what about AMD-powered notebooks? And I don't mean netbooks or CULV notebooks. It looks like Bulldozer won't come to notebooks until 2012, which would mean that AMD would most likely have to compete with Intel's 22nm Sandy Bridge successor, Ivy Bridge.
  • Penti - Tuesday, August 24, 2010 - link

    The Llano APU; it's briefly mentioned. It's where we're at. Basically a K10-based 4-core with an integrated DX11 GPU. Better than today, but not much competition.
  • mino - Tuesday, August 24, 2010 - link

    The GPU in it is supposed to be at least 5x the speed of current IGPs.

    Basically you get a "discrete" GPU for the price of an IGP ...
  • MonkeyPaw - Tuesday, August 24, 2010 - link

    I can see Bobcat scaling upward in notebooks. It's multi-core capable, and is a fully-functional CPU. A quad-core Bobcat with better-than-Intel graphics should be a very fulfilling product for notebooks in the mid-range, while providing good battery life (thank you, power gating). Anything above that could be handled by low-voltage Bulldozers as a premium offering. To me, that seems like a better solution than Intel's, where the jump from Atom to Core is so severe.
  • Kiijibari - Tuesday, August 24, 2010 - link

    Ehh guys ...

    MMX is deprecated in 64-bit mode, together with x87 and 3DNow!:

    --------
    The x87, MMX, and 3DNow! instruction sets are deprecated in 64-bit modes. The instructions sets are still present for backward compatibility for 32-bit mode; however, to avoid compatibility issues in the future, their use in current and future projects is discouraged.
    --------
    http://msdn.microsoft.com/en-us/library/ee418798%2...

    Why on Earth should AMD build two special MMX pipes into a brand-new µarchitecture?

    AMD just announced that they got rid of 3DNow!; MMX pipes make no sense at all.

    You probably mean XOP, don't you?
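
    To illustrate the point, a minimal sketch (my own; function names made up) of the same packed 16-bit add written with a legacy MMX intrinsic and with its SSE2 replacement. 64-bit MSVC won't even compile the MMX version, which is exactly why dedicating pipes to "MMX" looks odd:

        /* Sketch only: an MMX operation vs. its SSE2 replacement.
           Build with e.g. gcc -O2 -msse2. */
        #include <mmintrin.h>   /* MMX intrinsics  */
        #include <emmintrin.h>  /* SSE2 intrinsics */

        /* four 16-bit adds in a 64-bit MMX register (deprecated path) */
        __m64 add4x16_mmx(__m64 a, __m64 b) { return _mm_add_pi16(a, b); }

        /* eight 16-bit adds in a 128-bit XMM register (current path) */
        __m128i add8x16_sse2(__m128i a, __m128i b) { return _mm_add_epi16(a, b); }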
  • mino - Tuesday, August 24, 2010 - link

    From the HW design POV, those pipes are "MMX/3DNow!"-class stuff.
    They run SSE3, but they are still MMX-class.

    There is a reason Bulldozer has "FMAC" written there ...
  • Kiijibari - Tuesday, August 24, 2010 - link

    ... it is stupid to name a circuit after a deprecated ISA extension and not after its function.
    If it's doing stuff like 3DNow! and MMX, then call it a shuffle/permutation pipeline, not MMX ...

    The FMAC is the best example ... why is it labelled FMAC in that case and not SSE5/AVX/XOP?
  • KonradK - Thursday, August 26, 2010 - link

    Deprecated does not mean prohibited. Also, there are existing MMX programs, 64-bit operating systems other than Windows, and compilers other than MSVC.

    MMX and x87 are, however, prohibited in 64-bit kernel code.

    http://msdn.microsoft.com/en-us/library/ff545910%2...
  • iwod - Tuesday, August 24, 2010 - link

    From the design of Bulldozer's FPU it is clear that AMD wants multithreaded FP work to run on OpenCL. The dual integer cores look interesting now, but they are up against Sandy Bridge, the architecture that is supposed to be another leap like Pentium 4 to Core 2. And if Bulldozer comes any later, it will be up against the die shrink of Sandy Bridge, Ivy Bridge. Things don't look so good here.

    It is the mainstream / low end that looks very interesting. I am currently using a 1.8GHz Pentium M Dothan with 2GB of DDR RAM and Radeon 1600 graphics. I don't get hardware acceleration from the GPU; 720p is just barely playable with a very fast software decoder. It is fast enough to watch some 480p YouTube and handle most of my daily web surfing.

    Now, if Bobcat has similar or higher IPC than Dothan, a quad-core Bobcat with a 64-SP Radeon 5000-series GPU would still be within a reasonable die size on 40nm, and it will be cheap when it drops to 32nm or lower. Most of us don't need a SUPER FAST computer. A Bobcat with a Radeon 5-series or better plus a fast SSD is all we need.
  • aegisofrime - Tuesday, August 24, 2010 - link

    I don't recall Sandy Bridge being a revolutionary leap. Everyone has been saying that it's more evolutionary, the main difference being the addition of AVX.

    I REALLY REALLY REALLY hope that AMD announces later today what socket Bulldozer will be on... I desperately need more video encoding performance. I have an AM2+ motherboard and that bloody 1055T is singing its siren song to me every night. If Bulldozer is on AM3 I can get an AM3 board and the 1055T now and do a quick upgrade to Bulldozer later.

    Come on AMD. Your customers need more information to make an informed decision!
  • mino - Tuesday, August 24, 2010 - link

    Bulldozer gen 1 == primarily servers
    => 16/12-core (MCM), Socket G34 (current platform)
    => 8/6/4-core, Socket C32 (current platform)

    Bulldozer desktop (hopefully before X-mas 2011)
    => 8?/6/4-core, Socket AM3R2 (or AM3+, whatever they call it)
  • Pirks - Tuesday, August 24, 2010 - link

    Huh? You want more video encoding performance and you're thinking about upgrading the CPU? What kind of idiocy is that? Use a GTX 480 with Badaboom and your video encoding speed won't be matched by the CPUs of 2020 or maybe even 2030 :P
  • aegisofrime - Tuesday, August 24, 2010 - link

    Don't talk if you don't know what you are talking about. No GPU encoder out there is able to match x264 quality- or SPEED-wise. And the huge flaw in your statement is that Badaboom doesn't even support Fermi GPUs right now.

    Have you done any serious video encoding before, or are you just trolling as usual?
  • ChronoReverse - Tuesday, August 24, 2010 - link

    Indeed. I would try out CUDA encoders every once in a while in the hope that I could at least get the quality of x264 at its MINIMUM quality setting, but they can't even match that.

    Since x264 at minimum quality encodes slightly quicker on my quad core than a CUDA encoder does on my GTX 260, and still yields better quality, I really appreciate faster CPUs.
  • mapesdhs - Tuesday, August 24, 2010 - link


    Hate to say it, but unless GPU acceleration is available, the i7 is a far better choice for video encoding. I still use a 6000+ for most tasks, but numerous article reviews made it quite clear that AMD was not the best choice for video encoding, so I went with an i7 860 at 4GHz. Pricing was surprisingly good, and speed is excellent.

    Ian.
  • Dustin Sklavos - Tuesday, August 24, 2010 - link

    If you're encoding using Adobe software, ditch AMD until Bulldozer. Adobe's software makes heavy use of SSE 4.1 instructions, which current AMD chips lack, and the extra two cores don't pick up the slack compared to a fast i7.
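
    If anyone wants to check what their own chip reports before deciding, a quick sketch (my own) using GCC's cpuid.h; SSE4.1 is CPUID leaf 1, ECX bit 19:

        #include <cpuid.h>
        #include <stdio.h>

        /* Report whether the CPU exposes SSE4.1, the extension
           pre-Bulldozer AMD chips lack. GCC/Clang on x86. */
        int main(void) {
            unsigned int eax, ebx, ecx, edx;
            if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                printf("SSE4.1: %s\n", (ecx & bit_SSE4_1) ? "yes" : "no");
            return 0;
        }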
  • flyck - Tuesday, August 24, 2010 - link

    "From the design of Bulldozer's FPU it is clear that AMD wants multithreaded FP work to run on OpenCL."

    Not sure what you mean by that? (It is true they want to exploit that in the future with Fusion.) But at this moment I see: Sandy Bridge, 2 threads -> one FPU; Bulldozer, 2 threads -> one FPU.
  • BitJunkie - Tuesday, August 24, 2010 - link

    I think he's picking up on the point that this general-purpose design is going to favour integer operations over floating point. Looking at this architecture from the perspective of someone wanting to perform a lot of floating point matrix calculus, the performance improvement of each "core" is going to be proportionally less than for integer calcs.

    So what he's saying is that quite clearly AMD believe that general-purpose CPUs are just that, and have designed for a well-defined balance of FP and integer operations, i.e. if you want more FLOPS, go talk to the GPU?
  • stalker27 - Tuesday, August 24, 2010 - link

    "And if Bulldozer comes any later, it will be up against the die shrink of SandyBridge, Ivy Bridge. Things dont look so good in here."

    Basically, you've contradicted yourself right here:

    "Most of us dont need SUPER FAST computer."

    True, and true... Ivy will probably be faster than Bulldozer (speculatively), as Nehalem is compared to Stars, but most people, i.e. the "cash cows", won't buy these expensive products. Instead they will focus on mid to low end computers whose performance is more than enough for their needs.

    So things might not look good in reviews and on test benches, but in the stores and on people's bank balances they will look pretty good.
  • jabber - Tuesday, August 24, 2010 - link

    Hooray!

    I'm glad at last some folks are waking up to the fact that having the fastest or most expensive CPU means absolutely jack!

    All the latest fastest CPU stuff just means a little bit more internet traffic for tech review sites.

    The rest of the world doesn't give a damn.

    All the real world is interested in is the best CPU for the buck in a $400 PC box to run W7 and Office on. AMD needs to get a proper marketing dept to start telling folks that.

    All AMD has to do is produce good-performing chips for a good price. It doesn't need a CPU that beats the best of Intel.

    The real world lost interest in CPU performance the minute dual cores arrived and they could finally run IE/Office and a couple of mainframe sessions without it grinding to a halt.

    I bet Intel gives out more review samples of its top CPU than it sells.
  • JPForums - Tuesday, August 24, 2010 - link

    "All the real world is interested in is the best CPU for the buck in a $400 PC box to run W7 and Office on. AMD needs to get a proper marketing dept to start telling folks that."

    "The real world lost interest in CPU performance the minute dual cores arrived and they could finally run IE/Office and a couple of mainframe sessions without it grinding to a halt."

    Apparently we engineers aren't part of "the rest of the world".
    Try running products from the likes of Mentor Graphics, Cadence, and Synopsys on reasonably large designs. Check out what a difference each new CPU makes in Pro/E (assuming sufficient GPU horsepower). Run some large MATLAB simulations, Visual Studio compilations, and Xilinx builds. You don't even have to get out of college before you run into many of these scenarios.

    Trust me when I say that we care about the next greatest thing.
    An extra $1,000 on a CPU is easily justified when companies are billing $100+ per engineering hour (not to be confused with take-home pay).
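
    To put rough, purely illustrative numbers on that: if a $1,000-pricier CPU saves even half an hour of waiting per billable day, at $100/hour that's $50 a day, and the chip has paid for itself after about 20 working days.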
  • BitJunkie - Tuesday, August 24, 2010 - link

    Exactly so. An example would be a 24-hour calculation to perform a detailed 3D finite element analysis. This is not unusual using highly spec'd Xeon workstations from your vendor of choice.

    It might take 5 to 10 days to set up a model, including testing of different aspects: mesh density, discretisation errors, boundary effects, parametric studies. The set-up time, with numerous supporting pre-analysis runs, is what really costs. Anything we can do to reduce this is worthwhile.

    The above would be the typical process BEFORE considering a batch job on an HPC cluster if we wanted to look at a series of load cases, etc.

    Time is money.
  • mapesdhs - Tuesday, August 24, 2010 - link


    I know a number of movie studios who love every extra bit of CPU muscle they can get their hands on. Rendering really hammers current hardware. One place has more than 7000 Xeon cores, but it's never enough. Short of writing specialised software to exploit shared-memory machines that use i7 Xeons (which has its own costs), the demand for ever higher processing speed will always persist. Visual effects complexity constantly increases as artists push the boundaries of what is possible. And this is just one example market segment. As BitJunkie suggests, these issues surface everywhere.

    Another good example: the new Cosmos machine in the UK, which contains 128 x 6-core i7 Xeons (Nehalem-EX) with 2TB of RAM (i.e. 768 cores total). This is a _single system_, not a cluster (SGI Altix UV). Nothing less is good enough for running modern cosmological simulations. There will be much effort by those using the system on achieving good efficiency with 512+ cores; at the moment many HPC tasks don't scale well beyond 32 to 64 cores. The point being, improving the performance of a single core is just as important as general core scaling for such complex tasks. SGI's goal is to produce a next-gen UV system which will scale to 262,144 cores in a single shared-memory system (32,768 x 8-core CPUs).

    You can never have enough computing power. :D

    Ian.
  • stalker27 - Wednesday, August 25, 2010 - link

    You're 1% of the market... for you, Intel and AMD have reserved cherry-picked chips that they can charge you $1K for, but at the same time offer you that needed speed. How's that?

    BTW, he said the real world, not the rest of the world. That makes you somewhat of an illusion. But don't take it the wrong way... it's more that most of us would dream of working in an environment full of hot setups, big projects and big bucks, unlike in the real world where you have to mop the floor after debugging for 8 hours straight... if they don't force you to work an extra two hours without pay, never mind that before you start the workday you have to go to various bureaucratic public clerk offices to deal with stuff that was supposed to be taken care of by secretaries... who got fired for no apparent reason some time ago.

    So stop moaning... you have it good, even as the 1%.
  • Makaveli - Tuesday, August 24, 2010 - link

    lol, if AMD and Intel followed your logic we would all still be running Pentium IIs and Socket A Athlons, silly boy.

    You make yourself look like an ass when you make a generalized statement like that, as if you are speaking for the rest of the world.

    As that other guy pointed out, some of us do more than just office work on our PCs!
  • stalker27 - Wednesday, August 25, 2010 - link

    Those "some of you" don't make them R&D money... which, silly boy that you are... got you those fast chips in the first place.

    Oh boy, how important some people think they are.
  • iwod - Tuesday, August 24, 2010 - link

    Deleted:
  • iwod - Tuesday, August 24, 2010 - link

    I am not a professional engineer, but I do have my degree in Electrical Engineering. I fully understand how modelling and simulation, MATLAB, video encoding, or CG rendering need as much performance as they can get.

    But I am sorry, you are right: you are exactly not counted in "the rest of the world". You are inside a professional niche whether you like it or not. That includes even hardcore PC gamers, a group which is shrinking day by day due to competition from consoles. No, this is not to say PC gaming is going to die. It just means it is getting smaller. And this trend is not going to change until something dramatic happens.

    The rest of the world, counted in BILLIONS, is moving, or looking to move, to iPads, netbooks, cheap notebooks, or anything that just gets things done as cheaply as possible. It is the reason netbooks took off, and why Atom-based all-in-one PCs took off. No one in the marketing departments knew such a gigantic market existed.

    Lastly, I want to emphasize that by the "world" I really mean the world, not an American view of the world that would literally just be America by itself. China, India, Brazil, and even countries like Japan are having trouble selling high-end PCs.
  • jabber - Wednesday, August 25, 2010 - link

    Yep, all you computer engineer folks and render farms etc. account for a very small minority in the "world of computer users". You are not mainstream users.

    In general terms the world isn't really that interested anymore in CPU performance improvements.

    Most folks out there just want smaller and lower power so they can carry a computer around with them. They don't give a damn what the CPU architecture is.

    The leviathan CPU approach by AMD and Intel could go the way of the dinosaur for mainstream computing. ARM could well be the new mainstream CPU leader in just five years.

    Just think outside your own little box.
  • B3an - Wednesday, August 25, 2010 - link

    Ridiculously small-minded comment.

    Render farms and the like may not be mainstream, but gaming is, and then there are things like video encoding, servers, workstations and databases, all very popular mainstream stuff that millions of people use and that the internet also relies on.

    A very large percentage of computer users will always want faster CPUs.

    If Intel or AMD did what you think most people want, then nothing would progress either. No 3D interfaces, no artificial intelligence, no anything, as the power needed for it would never be there.
  • BitJunkie - Wednesday, August 25, 2010 - link

    It all comes down to usage models right? The point is that AMD and Intel are trying to capture as many usage models as possible within a given architecture.

    This is why modular design is kind of appealing - you can bolt stuff together to hit the desired spot in the thermal-computational envelope.

    The thing that "engineers" fall foul of is that there is a divergence going on. On the one hand general computing is dominating, with a desire to drive down power usage. On the other hand there is the same appetite for improved computational performance as we get smarter and more ambitious in the way we tackle engineering problems.

    The issue is that both camps are looking to the same architecture for answers.

    The reason why that doesn't work for me is that some computations just don't benefit from parallelism - more cores doesn't mean more productivity. Therefore I want to see the few cores that I do use become super efficient at flipping bits on a floating point calculation.

    Right now there's no clear answer to that problem - but it will probably come with Fusion and the point at which the GPU takes the role that math co-processors did before being swallowed into the CPU. For this to work we need Microsoft to handle the GPU compute stuff natively within Windows, so that existing code can execute without thinking about what part of the hardware is lifting the load.

    Therefore my sincere hope is that GPUs will become the new math co-processors and Windows 8 will make that happen.

    Oh, and there's no need for any tribalism here wrt usage models. It's all good.
  • jabber - Wednesday, August 25, 2010 - link

    No, it's not small-minded. It's looking at the big picture.

    The big picture is that for most users their CPU power needs were reached and surpassed some time ago.

    CPUs are not the bottleneck in modern PCs. Storage systems are.

    We need better, cheaper and faster storage.

    I've been pushing out 1.6GHz dual-core Atoms to 95% of my small business customers and a good chunk of domestic customers for the past year.

    I haven't had one moan or complaint that the PCs were not fast enough. Very few customers are hardcore gamers. Gamers are still a small subsection of the computing world.

    I'm not asking AMD/Intel to stop research into new and faster CPU designs. Keep going, boys, it's all good.

    I'm just saying that the majority of mainstream computing lies along a very different path going forward from those who require power at all costs.

    Not all of us need octa-cores at 4GHz+. A lot of us can get by with a 2GHz dual core and a half-decent 7200rpm HDD.

    Most of the PCs I see are still single core. Folks are managing just fine right now.

    Plenty of folks are now managing with just 1GHz or less on a mobile device. That's why Intel is taking ARM more seriously, as they see the future mainstream being more low-power and mobile-based than leviathan mega-core, mega-wattage beasts.

    Things will change rapidly over the next three or four years.
  • Aries1470 - Friday, August 27, 2010 - link

    Quote:

    "We need better, cheaper and faster storage.

    I've been pushing out 1.6Ghz dual core Atoms to 95% of my small business customers and a good chunk of domestics for the past year.

    I havent had one moan or complaint that the PCs were not fast enough. Very few customers are hardcore gamers. Gamers are still a small subsection of the computing world."

    Well, I for one totally agree. I purchased an Atom 330 dual core last year, and it does more than enough.
    I already had a much more powerful system, which I use about... once or twice a month, if that! It is a quad core with 4 gigs of RAM and a 2GB 9600 GPU.

    I have moved away from gaming and encoding and all that stuff.

    The motherboard I have is:
    an ATOM-GM1-330, which I imported to Australia from the U.S.A., since the distributor here does not carry this model.
    I have paired it with 4GB of memory, but am using only 3GB, since I am running 32-bit systems (XP & Win 7),
    a Blu-ray writer....
    and an LP 5450 used with a 16x -> 1x adapter.

    It plays blu-ray great while browsing at the same time!
    I browse the internet at the same time as my wife and kid watch a movie on the 50" plasma, with NO stutter.

    Needless to say, I got it as a secondary PC... and it has become my main PC. It is left on basically 24/7, with NO fan on the CPU! Low power consumption too.

    It performs great for the functions I want, and can even play Civ IV on it... but not much else. If I want to play real gaming, I use my other pc.

    So for what it is, it works great for my needs! No useless power consumption, and it does its BOINC too, albeit slowly, but still better than my older P2-550... which was still alive a few years ago.

    Most people I know don't use their PC for gaming anymore, mostly for Facebook/Twitter and video calling; they have their Wiis & Xboxes, and one has a PS3...

    Ok, end of rant, but to conclude, I concur: your average Joe has his gaming machine (a console), and his PC is an HTPC or the like, not a gaming power PC.
  • gruffi - Thursday, August 26, 2010 - link

    Sandy Bridge, the architecture that is supposed to be another leap like Pentium 4 to C2D? Thanks for the joke of the day.

    Sandy Bridge looks more like a minor update of Nehalem/Westmere. More load/store bandwidth, improved cache, AVX and maybe a few other little tweaks. Nothing special. I think it will be less of an improvement than Core 2 (the Core successor) and Nehalem (the Core 2 successor) were.

    In many ways Bulldozer looks superior to what Intel has to offer in 2011.
  • Lonbjerg - Tuesday, August 24, 2010 - link

    I don't care for "Bobcat"...mediocre performance in a cramped formfactor (netbooks) have as much interest to me as being dragged naked across field filled with broken glass.

    The "Bulldozer" looks fine on paper...problem is that so did Phenom.
    I look forward to the real reviews, and not PR slices :)
  • Dustin Sklavos - Tuesday, August 24, 2010 - link

    Comments like this really bother me. You may not care about netbooks, but a lot of people do. Current ones don't pass the grandma test - can your grandmother do whatever task she needs to on them, like check e-mail, browse the internet, watch HD video? - and any advance here is welcome.

    Generally speaking a netbook is not supposed to be your main machine, but something you can chuck into your bag, take with you and do a little work on here and there. I write a lot, and have to work on other people's computers from time to time, so a netbook that doesn't completely suck is invaluable to me. Netbook performance is dismal right now, but Bobcat could successfully fix this market segment.

    So no, you're not interested in netbooks and you'd rather be dragged over hot coals than purchase one. But that just means they're not useful - TO YOU. There are a lot of people here interested in what Bobcat can do for these portables, and I count myself among them.
  • Lonbjerg - Wednesday, August 25, 2010 - link

    I don't care that many people care for mediocre performance in a crappy format.
    No matter what you do with a netbook, it will always be lacking.

    I don't care what grandma wants (she will buy Intel BTW, due to Intel's brand recognition).

    I don't care for Atom either.
    Or i3
    Or i5
    Or Phenom
    I do care about a replacement for my i7 @ 3.5GHz...
  • Dustin Sklavos - Wednesday, August 25, 2010 - link

    I'm trying to figure out why you're commenting on any of this at all.
  • flipmode - Tuesday, August 24, 2010 - link

    Seriously Anand, it is crummy that I cannot find a whole section of your website. I hate to spam an entirely separate article, but it is completely lame to have to spend 15 minutes doing a Google advanced search to find the AnandTech article I'm looking for.

    One of the very, very few truly Class A+ hardware sites on the internet - you can count all the members of that class on one hand - and you make it seriously hard to find past articles and you completely OMIT a link to an entire category of your reviews. Insane.

    Please put a link to the "System" section somewhere. Please!
  • JarredWalton - Tuesday, August 24, 2010 - link

    Our system section hasn't had a lot of updates, but you can get there via:
    http://www.anandtech.com/tag/systems

    In fact, most common tags can be used the same way (i.e. /AMD, /Intel, /NVIDIA, /HP, /ASUS, etc.). The only catch is that many of the tags will only bring up articles since the site redesign, so you'll want to stick with the older main topics for some areas. Hope that helps.
  • mino - Tuesday, August 24, 2010 - link

    "so I’m wondering if we’ll see Bulldozer adopt a 3 - 4 channel DDR3 memory controller"

    Bulldozer will use the current G34 platform. Hope that answers your question :)
  • VirtualLarry - Tuesday, August 24, 2010 - link

    Bulldozer sounds like amazing stuff. I wonder if the way they have arranged the int units into modules means that we will be getting more cores for our dollars compared to Intel. More REAL cores, I mean. I'm just a little disappointed that the int pipelines went from 3 ALUs to 2; I hope that doesn't affect performance too much.
  • gruffi - Thursday, August 26, 2010 - link

    Integer instruction pipelines are increased from 3 to 4. That's 33% more peak throughput. The number of ALUs/AGUs to keep these pipelines busy is meaningless without knowing details. K10 has 3 ALUs and 3 AGUs, but they are bottlenecked and partially idling most of the time. Bulldozer can do more operations per cycle while drawing less power, even with only 2 ALUs and 2 AGUs. How can that be disappointing?
  • ezodagrom - Tuesday, August 24, 2010 - link

    I think Bulldozer has the potential to be really competitive, mainly because Sandy Bridge looks quite unimpressive.
    In a recently leaked PowerPoint from Intel, apparently until Q3 2011 the best Intel CPU is still going to be Gulftown-based, possibly the Core i7 990X. According to Intel's benchmarks in the leaked PowerPoint, the best Sandy Bridge, that is, the Core i7 2600, will apparently be around 15% to 25% better than the i7 870, with the i7 980X being 25% to 35% better than the i7 2600.
  • Mat3 - Tuesday, August 24, 2010 - link

    I have a question... it was earlier speculated that BD would have four ALU pipelines per integer core. It was thought that one way they could make use of them was to send both sides of a branch down two pipes and keep the correct result. Obviously this isn't the case, but my question is, why not? Wouldn't it be better to do that and just discard the branch predictors entirely? Why isn't that better?
  • Zoomer - Wednesday, August 25, 2010 - link

    Basically you'd need 2x the power for much less than a 2x performance increase. Modern branch predictors can have very good hit rates, ~90%+. It simply made more sense to use the second int unit for another thread.
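
    (Rough illustrative numbers: with a ~90% hit rate and a misprediction flush costing on the order of 20 cycles, prediction wastes roughly 0.1 x 20 = 2 cycles per branch on average, whereas eagerly executing both paths ties up half the issue width, and the corresponding power, on every branch, correctly predicted or not.)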

    However, if you need the absolute best single-threaded int performance at all costs, imho, what you suggest wouldn't be bad. In fact,
  • Edison5do - Tuesday, August 24, 2010 - link

    Finally, besides the price competition, we will be able to see some tech competition. We should also urge AMD not to drop the ATI brand, because new and high-tech CPUs should be paired with high-quality, nicely priced Radeon GPUs.

    I really don't think people are ready to see the "AMD" brand as a head-to-head competitor to the "Intel" brand; by this I mean that they should rely on ATI, which is already well accepted by the public, for more time before they even start thinking about that.
  • angrysand - Tuesday, August 24, 2010 - link

    They may have had the on-die memory controller first, but Atom basically created the netbook market. AMD is just improving on what Intel helped create (and that remains to be seen).

    I'd hate to see AMD go, because I like having reasonable performance for a reasonable price. But they had better get their act together and put out faster CPUs.
  • ABR - Wednesday, August 25, 2010 - link

    Atom did not create the netbook market, some convergence of wireless data and increasing use of the web by non-computer folk did. The first "netbook" products were the Crusoe-based mini-notebooks starting in 2001. Unfortunately for Transmeta, interest in the high-portability / long battery life model was low, only a couple of models even came out, and they ended up having to compete with Intel for scraps of the low-end laptop market. They lost, and Intel only finally caught up with their technology later with the Atom, when, coincidentally or not, the market was finally ready.
  • Nehemoth - Tuesday, August 24, 2010 - link

    Why will Bobcat be manufactured on the 40nm process instead of 32nm? Is it because of the GPU?

    Why will it be manufactured at TSMC instead of GlobalFoundries?

    I suppose this could be a matter of GF not being ready at 32nm, but could we see a switch from TSMC to GlobalFoundries after Bulldozer begins to be manufactured?
  • iwod - Wednesday, August 25, 2010 - link

    TSMC has much higher 40nm capacity than GF has at 32nm. Bobcat is going to be a low-end product which will hopefully generate a high volume of sales. TSMC in this case will be a much better fit than GF.
  • moozoo - Wednesday, August 25, 2010 - link

    I wonder how hard it would be to make a version that has two floating point cores and one integer core.

    Will AMD have a product to match Intel's MIC (Larrabee)?
    (http://www.anandtech.com/show/3749/intel-mic-22nm-...
  • YuryMalich - Wednesday, August 25, 2010 - link

    Hi,
    there is a mistake on page 5 in this picture: http://images.anandtech.com/reviews/cpu/amd/hotchi...
    Two 128-bit FMAC units were drawn in the Phenom II microarchitecture diagram.
    But the K10 processor doesn't have FMAC units at all! It has one FMUL, one FADD and one FMISC (FLOAD) unit.
    The FMAC (multiply-add) units are new in the Bulldozer microarchitecture.
  • Jack Sparow - Wednesday, August 25, 2010 - link

    "Ivo August 25, 2010
    How many threads everyone processor (“Interlagos”, “Valencia” and “Zambezi”) can do simultaneously per core compare with Phenom II processor?

    Reply
    John Fruehe August 25, 2010
    One thread per core."

    This quote is from the AMD blogs homepage. :)
  • silverblue - Wednesday, August 25, 2010 - link

    I think I touched on this once before on a THQ news article - John Fruehe is being confusing. The correct definition of a complete Bulldozer core is a module, which is a monolithic dual-integer-core package also consisting of other shared resources - the top image on page 4 of this article is a great guide. So, a four-module (or quad-core, as we currently term them) Bulldozer will handle eight threads concurrently, as those four modules possess eight integer cores.

    As such, I don't see non-SMT Bulldozer cores ever coming out.
  • Mr Perfect - Wednesday, August 25, 2010 - link

    It sounds like AMD will be selling by the integer core though, not by module. There's this from Page 4:

    "Processors may implement anywhere from one to four Bulldozer modules and will be referred to as 2 to 8 core CPUs."

    So they will be referring to four-module APUs as having eight cores, rather than a quad core with HyperThreading.
  • silverblue - Wednesday, August 25, 2010 - link

    Sorry, I did mean to tackle the part of your thread dealing with different versions of Bulldozer. Valencia is a server version of Zambezi, i.e. 4 modules/8 threads. Interlagos is 8 modules/16 threads.
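
    To lay the naming out as described in this thread (a rough summary of the module counts mentioned above, not confirmed specs):

        Zambezi (desktop):   4 modules = 8 integer cores = 8 threads
        Valencia (server):   4 modules = 8 integer cores = 8 threads
        Interlagos (server): 8 modules = 16 integer cores = 16 threads (MCM)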

    From AMD's own figures, each module is 1.8 times the speed of a current K10.5 core at the same clock speed. It is a little unfair to compare "core" to core due to the way they're designed and implemented. Considering each K10.5 core has three ALUs and Bulldozer has two per integer core, 90% of that integer performance per core is very good - for a quad-core CPU in the current sense, Bulldozer would theoretically outpace Phenom II by 80% in integer work while having only 33% more integer resources, assuming the chip is well fed. If the rumours about a quad-channel memory bus are correct, you'd hope it would be.
  • jeremyshaw - Wednesday, August 25, 2010 - link

    I believe Intel also delegated some Atom production to TSMC, unless I am wrong?
  • Penti - Thursday, August 26, 2010 - link

    TSMC also manufactures VIA's / Centaur Technology's x86 processors.

    Probably a few others too. There are some x86 SoCs for embedded stuff from other vendors.
  • Perisphetic - Wednesday, August 25, 2010 - link

    It's time to kick ass and chew bubble gum... and AMD is all outta gum.
  • NaN42 - Wednesday, August 25, 2010 - link

    First of all: I think AMD has made huge progress with Bulldozer.
    But I'm wondering how the FPU will work exactly. A look at the latencies (especially of FMA instructions) would be interesting too. Another question is whether it is possible to start one independent multiply and one addition at the same time in an FMAC unit. Furthermore, the throughput is of interest: is it one mul and one add instruction per cycle? Is there any advantage to using 256-bit AVX instructions, besides shorter code?
    I appreciate that AMD will drop most 3DNow! instructions, because these are just outdated. Perhaps they could also drop the MMX instructions but keep x87, because it is sometimes useful and needed.
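
    For anyone wondering what an FMAC actually buys you, a minimal sketch in plain C (my own illustration; whether a compiler turns this into FMA4, FMA3 or a separate mul and add depends entirely on the target flags):

        /* The multiply-add pattern a fused FMAC can retire as a single
           operation with one rounding step, instead of a separate FMUL
           followed by an FADD. */
        void muladd(float *a, const float *b, const float *c, int n) {
            for (int i = 0; i < n; ++i)
                a[i] += b[i] * c[i];   /* a[i] = a[i] + b[i]*c[i] */
        }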

    I expect the decoder, besides the FPU (compared to Sandy Bridge), to be another bottleneck, because the 4-wide decoder has to feed two nearly independent cores, and today's 3-wide decoders (except those in Nehalem/Westmere) are sometimes a bottleneck even in a single-core design.

    @Ontario: I expect this platform to be much more powerful than the Atom platforms. Perhaps it will even be much more efficient than Atom. A direct comparison between Ontario and the VIA Nano 3000 might be interesting, especially when VIA releases dual-core chips.
  • GourdFreeMan - Thursday, August 26, 2010 - link

    It seems that AMD is ceding the traditional laptop and desktop market to Intel and chasing the server market and Atom/ARM's market with Bulldozer and Bobcat, respectively. Lower theoretical peak IPC and greater parallelism are a good match for the high level of data and transaction-level parallelism in the server market, but existing consumer software, excepting video encoding and a handful of games, still tends to favor single-threaded performance over parallelism. I suppose we should wait for benchmarks in actual applications to see how the architectural improvements have impacted the performance of AMD's new designs, but I imagine some people are already disappointed. Too bad the resources of both integer cores in a module can't work on a single thread, otherwise we could have had a very serious contender on the desktop...
  • silverblue - Thursday, August 26, 2010 - link

    He sure seemed confusing on the comments page of his blog a few weeks back. Understandably evasive considering he's a server tech guy, not consumer tech, plus AMD had yet to reveal these details, but he was comparing 16 Bulldozer cores to 12 Magny-Cours cores, which is technically incorrect as they're not comparable UNLESS you're talking about integer cores. At least, that's my interpretation.

    AMD will probably market Zambezi as an 8-core CPU in order to woo the more-is-better crowd, but regardless of how it handles multi-threading, I still view a module as an actual core, by virtue of the fact that the "cores" are not independent of the module they belong to. I know I'm wrong and that's fine, but it helps in understanding the technology better - eight cores that exist in pairs and share additional resources might serve to confuse.
  • gruffi - Thursday, August 26, 2010 - link

    A 12-core Magny-Cours has 12 "integer cores" and 12 128-bit FPUs. A 16-core Interlagos has 16 "integer cores" and 16 128-bit FMACs. Why is it technically not comparable? At least you know you are wrong. ;)
  • silverblue - Friday, August 27, 2010 - link

    The implementation is very different from what AMD has done before; that's what I'm trying to get at. Everyone knew that despite Intel and AMD having different types of quad-core processor prior to Nehalem, they were still classed the same, so I suppose it doesn't matter in the grand scheme of things. There's nothing to stop AMD from releasing a 24-"core" Bulldozer; it shouldn't be any larger than Magny-Cours - perhaps slightly smaller in the end - yet its integer performance would be through the roof.

    However, people are bemoaning the fact that for 33% more "cores", AMD is only getting 50% extra performance - it's worth bearing in mind that AMD does this with 4 fewer, albeit better-utilised, ALUs than Magny-Cours (32 compared to 36). Make no mistake, Bulldozer is far more efficient and capable in this scenario, but I can't help wondering how strong Phenom II might have been if it'd had a slightly more elegant design.
  • ROad86 - Thursday, August 26, 2010 - link

    Without being a PC expert, I think AMD was trying to maximize multi-threaded performance in less die size while being more efficient in power consumption. But I believe they are still developing Bulldozer in order to maximize single-threaded performance too. On the desktop not many applications are threaded well enough, so they have to be competitive in single-threaded performance as well. That's why I believe they haven't announced a release date yet. Alongside the new 32nm manufacturing process, and, I think, waiting for the release of Sandy Bridge in order to see how much better Intel's new processors are, the release date will probably be Q4 2011. But these are just speculations.
  • Vallwesture - Thursday, August 26, 2010 - link

    It has been over two years...
  • ROad86 - Thursday, August 26, 2010 - link

    New architecture, completely new design; maybe software needs to be optimized too (Windows 7, for example). In the end, let's hope it brings something truly amazing. On paper it does, but let's wait for reviews!
  • KonradK - Thursday, August 26, 2010 - link

    "The basic building block is the Bulldozer module. AMD calls this a dual-core module because it has two independent integer cores and a single shared floating point core that can service instructions from two independent threads"

    I'm curious whether CPU schedulers can distinguish cores located in the same module from cores located in other modules of Bulldozer.
    Because two cores located in the same module share one FPU, running two FPU-heavy threads on two cores in the same module while leaving cores in other modules idle would be suboptimal, to say the least.
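
    As an illustration of the kind of placement a module-aware scheduler would do automatically, here is a userspace sketch (Linux; the assumption that, say, logical CPUs 0 and 1 share a module while 0 and 2 do not is purely hypothetical and depends on how the BIOS enumerates the cores):

        #define _GNU_SOURCE
        #include <pthread.h>
        #include <sched.h>

        /* Pin the calling thread to one logical CPU. Pinning two FPU-heavy
           threads to cores in different modules avoids contending for the
           shared FPU; pinning them to the same module forces them to share it. */
        static void pin_to_cpu(int cpu) {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        }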
  • Simen1 - Tuesday, August 31, 2010 - link

    From page 6: "Aggressive prefetching usually means there’s a good amount of memory bandwidth available so I’m wondering if we’ll see Bulldozer adopt a 3 - 4 channel DDR3 memory controller in high end configurations similar to what we have today with Gulftown."

    AMD already has a 4-channel DDR3 design. It's in the Opteron 6100 line of processors on the G34 socket (LGA1974). AMD has promised it will be compatible with future Bulldozer-based processors.
  • liem107 - Monday, September 6, 2010 - link

    I wonder how Bobcat would fare against the VIA Nano. Considering VIA's portfolio, it would be a good acquisition for NVIDIA, for example, to get their hands on a fairly good x86 core and license.
