79 Comments
Beenthere - Friday, January 27, 2012 - link
The Hot Fix is better than No Fix, and the Win 8 beta looks to be a few percent better than the Win 7 Hot Fix. So it's all good but nothing startling. Combined with Vishera/Piledriver it should provide a nice performance bump, however.
Ramon Zarat - Friday, January 27, 2012 - link
"The Hot Fix is better than No Fix". No...
A fix that brings no additional value (less than 2%, well below statistical significance) and at the same time, by its mere presence, introduces the potential for conflict and instability (simple law of entropy) can only be detrimental to a system, not positive.
Another way to look at it: if my car is broken to the point of being unusable and I apply $1,500 worth of parts and labor, and in the end it makes no difference in its usability, why spend the $1,500 in the first place? New parts are better than no new parts?
I love AMD as a company, I really do. The truth is that Bulldozer, in conjunction with present operating systems, is broken and dysfunctional, with embarrassing sub-par performance and unacceptable power consumption per instruction for what is supposedly an 8-core CPU that cannot even approach a 4-core 2500K's performance in 90% of scenarios. AFAIK, that applies to ALL operating systems, including Linux and Mac. I even have my doubts that a perfectly Bulldozer-tuned OS would be able to compete with Intel's offerings. I'd like nothing better than to be proved wrong on that one.
AMD knew from day 1 (years ago) that their new unconventional architecture would face issues such as this one, but it's only now, months after the actual launch of the product, that they are working on fixes that fix nothing? Great example of bad management / strategy / planning and lack of foresight from AMD. I hope they have learned a lesson, because an underdog can't afford such huge missteps too often and hope to stay alive in the long run.
bigboxes - Friday, January 27, 2012 - link
But can I download more torrents with this fix?
ThomasS31 - Friday, January 27, 2012 - link
Why haven't you tested core usage / clock gating and power consumption?
I thought the main problem of scheduling related to that as well...
Lugaidster - Friday, January 27, 2012 - link
This!
Is power consumption altered with this hotfix?
Cheers
MrSpadge - Sunday, January 29, 2012 - link
Power consumption for given "mixed workloads" should be altered.. increased, actually, since more modules are active. Overall energy consumption, on the other hand.. I don't know. More power draw, but for a shorter time.
MrS
Marlin1975 - Friday, January 27, 2012 - link
Anybody notice the difference between the first hotfix and the 2nd?
The first one even helped some Intel CPUs. The 2nd one seems to not help either AMD or Intel as much???
saneblane - Friday, January 27, 2012 - link
I fail to see what was fixed, i give them credit for adding 2 more fps in x264 second pass. But come on this is just stupid. their is no performance increase that is going to be noticed or seen by this so called hot fix. We already accept the fact that bulldozer failed on the desktop in it's first showing, why remind us again. Things like this make people lose confidence in you, just lay low and try to improve Piledriver.
Lonyo - Friday, January 27, 2012 - link
Because this will probably help future AMD CPUs? It's something Microsoft will need to deal with, and possibly already has with Win 8. It's free performance, and it helps pretty much test future similarly equipped AMD CPUs.
What are they supposed to be doing, other than giving free performance gains to Bulldozer users?
saneblane - Friday, January 27, 2012 - link
well if you think the hotfix is worth wild, then i guess AMD has done it's job. Their is always more people that are easily fooled than the ones who can see the truth. Free performance??? Give me a freaking break, i waited years for bulldozer, and your going to tell me about a cpu that cost more than an i5 2500 and losses in almost every benchmark and games. That this hotix is free perfomance, haha. Zambezi users are paying a lot for this so called "free" when even i5 2500 beat the crap out of it, i guess sandy bridge has a lot of "free" performance too.
Beenthere - Friday, January 27, 2012 - link
The Win 7 Hot Fix speaks for itself. As noted it's a small bump - but it's free. It's not AMD's fault that Microsucks O/Ss suck. It's reported Linux does a better job of scheduling, probably because it's used on a lot of servers with heavy work loads.
I always tell people to buy what makes them happy. If you're happy with a product from a convicted criminal corporation and choose to support their efforts to eliminate consumer choice and drive up PC hardware prices - that's your choice and you're perfectly free to do so. Bashing AMD is not going to change reality however, no matter how disappointed you are in them.
In reality ANY of today's current model CPUs have more than enough computing power for 90+ percent of PC users. If all you do is run benchmarks then you could be misinformed...
http://www.theinquirer.net/inquirer/news/2120866/i...
gamerk2 - Friday, January 27, 2012 - link
Funny, considering different Linux distributions use different schedulers. Let's not also forget there is OVERHEAD to doing a lot of processing within the scheduler, and keeping track of thread/resource use can be a pain.
sor - Saturday, January 28, 2012 - link
Huh? The process scheduler in Linux is dependent on which version of the kernel you have. Any current distribution should be using CFS. You may be confusing this with the options of IO schedulers.
B3an - Friday, January 27, 2012 - link
Oh grow up you immature moron (typical Linux user!). And Apple are FAR worse now than MS ever was, as well as bigger. BTW it's not the 1990's anymore.
frozentundra123456 - Saturday, January 28, 2012 - link
I can't believe people are still blaming Microsoft for Bulldozer's failure. It seems to me that the responsibility of a company is to bring out a product that works in the current environment, i.e. one that works efficiently with Win 7. Especially when you control a small portion of the market, you should make a product that "just works". You shouldn't expect the software to be rewritten for your product. And Intel doesn't seem to have any problem making processors that work efficiently with the current environment.
Morg. - Tuesday, January 31, 2012 - link
I like the *could* be misinformed -- if Intel didn't want benchmarks and reviewers to like them, I'm pretty sure they wouldn't do anything for it ;)
cigar3tte - Friday, January 27, 2012 - link
I don't normally bother, but there is way too many here...
Worth wild = worthwhile
It's = its
Their = There
Your = you're
Losses = loses
And yeah, it's more of a free fix than free performance. Bulldozer users are getting back what was lost, rather than gaining something.
snouter - Friday, January 27, 2012 - link
"there are way too many"I don't normally bother either.
jonup - Saturday, January 28, 2012 - link
You can always look at a glass as half-full or half-empty.
@typos: Sad part is that some of them are native speakers.
gevorg - Friday, January 27, 2012 - link
Can the Sandy Bridge CPUs benefit from this by any chance?
wumpus - Friday, January 27, 2012 - link
I'd have to believe that any CPU with SMT enabled will benefit. That is, unless they already have this feature. Of course, Intel has been shipping SMT processors since the P4. I'd like to believe that Microsoft simply flipped whatever switch to treat Bulldozer cores as SMT cores, but I don't have enough faith in Microsoft's scheduling to believe they ever got it right.
hansmuff - Friday, January 27, 2012 - link
At least Windows 7 (haven't tested anything else) schedules threads properly on Sandy Bridge. HT only comes into play once all 4 cores are loaded.
tipoo - Friday, January 27, 2012 - link
Windows already has intelligent behaviour for Hyperthreading. I don't think this will change anything on the Intel side.
silet1911 - Wednesday, February 1, 2012 - link
Yes, a website called Jagatreview has reviewed a 2500+patch and there is a small performance increase:
http://www.jagatreview.com/2012/01/amd-fx-8120-vs-...
tk11 - Friday, January 27, 2012 - link
Even if a scheduler did take the time to figure out when threads shared a significant number of recent memory accesses, would that be enough information to determine that a thread would perform optimally on the same module as a related thread rather than on an unused module?
Also... Wouldn't running code that performed "intelligent core/module scheduling based on the memory addresses touched by a thread" negatively impact performance far more than any gains realized by scheduling threads on cores that are merely suspected to be more optimally suited to running each particular thread?
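Almost certainly yes on the second question, which is why nobody does it. The cheap alternative, and apparently what the hotfix relies on, is purely static topology: Windows can report which logical processors share execution hardware, so a placement decision costs a few bitmask tests rather than any memory-access tracking. A minimal sketch in C (Win32; error handling trimmed, and it assumes a hotfixed system reports Bulldozer module siblings under RelationProcessorCore the same way HT siblings are reported, which is essentially what is being debated here):

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        DWORD len = 0;
        GetLogicalProcessorInformation(NULL, &len);  /* first call just reports the needed size */
        SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info = malloc(len);
        if (!info || !GetLogicalProcessorInformation(info, &len))
            return 1;

        for (DWORD i = 0; i < len / sizeof(*info); i++) {
            if (info[i].Relationship == RelationProcessorCore) {
                /* logical CPUs sharing one physical core (or, assumed here,
                   one Bulldozer module) appear together in this mask */
                printf("siblings mask: 0x%llx\n",
                       (unsigned long long)info[i].ProcessorMask);
            }
        }
        free(info);
        return 0;
    }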
eastyy123 - Friday, January 27, 2012 - link
could someone explain the whole module/core thing to me please
i always assumed a core was basically like a whole processor shrunk onto a die, is that basically right?
and how do the AMD modules differ?
KonradK - Friday, January 27, 2012 - link
Long sory short:
Bulldozer's module consist 2 integer cores and 1 floating point (FPU) core.
KonradK - Friday, January 27, 2012 - link
"Story" no "sory"I'm sorry...
Ammaross - Friday, January 27, 2012 - link
"Bulldozer's module consist 2 integer cores and 1 floating point (FPU) core."However, the 1 FPU core can be used as two single floating point cores or a single double double floating point core, so it depends on the floating point data running through it.
KonradK - Friday, January 27, 2012 - link
Not sure what you are supposing.
Precision is the same regardless of whether one or two threads are executed by the FPU core. There are single and double precision FPU instructions, but each thread can use any of them.
However if you mean single or double performance:
If two FPU threads run on the same module, each of them will have half the performance in comparison to the same two FPU threads running on separate modules.
It's just that in the first case one FPU is shared by two threads.
And that is the whole point of the hotfixes - avoiding such a situation as long as this is possible.
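That halving is easy to sanity-check from user space. A rough micro-benchmark sketch in C (Win32): pin two FPU-heavy threads to the same module, then to different modules, and compare wall time. The masks assume logical CPUs 0 and 1 are one module and CPU 2 sits in another, which is how the FX chips are generally enumerated, but verify the layout on your own box first:

    #include <windows.h>
    #include <stdio.h>

    static DWORD WINAPI spin_fpu(LPVOID arg)
    {
        volatile double x = 1.0;
        for (long i = 0; i < 200000000L; i++)
            x = x * 1.0000001 + 0.0000001;   /* keep the FPU pipes busy */
        (void)arg;
        return 0;
    }

    static double timed_pair(DWORD_PTR maskA, DWORD_PTR maskB)
    {
        HANDLE t[2];
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);

        t[0] = CreateThread(NULL, 0, spin_fpu, NULL, CREATE_SUSPENDED, NULL);
        t[1] = CreateThread(NULL, 0, spin_fpu, NULL, CREATE_SUSPENDED, NULL);
        SetThreadAffinityMask(t[0], maskA);   /* pin before the threads start */
        SetThreadAffinityMask(t[1], maskB);

        QueryPerformanceCounter(&t0);
        ResumeThread(t[0]);
        ResumeThread(t[1]);
        WaitForMultipleObjects(2, t, TRUE, INFINITE);
        QueryPerformanceCounter(&t1);

        CloseHandle(t[0]);
        CloseHandle(t[1]);
        return (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
    }

    int main(void)
    {
        /* assumption: CPUs 0+1 share a module, CPU 2 is in another module */
        printf("same module:      %.2f s\n", timed_pair(1, 2));   /* masks 0b01, 0b10  */
        printf("separate modules: %.2f s\n", timed_pair(1, 4));   /* masks 0b001, 0b100 */
        return 0;
    }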
KonradK - Friday, January 27, 2012 - link
"Ideally, threads with shared data sets would get scheduled on the same module, while threads that share no data would be scheduled on separate modules."I think that it is impossible for sheduler to predict how thread will behave and it is not practical to track the behaviour of running thread (tracking which areas of memory are accessed by threads would be so computational intensive as computational intensive is emulation).
So ultimately there is choice between "threads should be scheduled on separate modules if possible" or "do not care which cores belongs to the same module" (pre-hotfix behaviour).
Second means that Bulldozer will behave as PIV EE (2 core, 4 threads) on Windows2000, at least for threads that uses FPU heavily. Windows 2000 does not ditinguish between logical and physical cores.
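Right, and the first of those two policies is nearly free to implement once the scheduler knows the pairing; no behaviour prediction is needed at all. A toy illustration in C (not how the NT kernel is actually structured; it just assumes cores 2m and 2m+1 form module m, as on an FX-8150):

    #include <stdbool.h>

    #define NUM_MODULES 4                        /* FX-8150: 4 modules x 2 cores */

    /* busy[c] is true if logical core c is already running a thread */
    int pick_core(const bool busy[2 * NUM_MODULES])
    {
        /* pass 1: prefer a core whose module sibling is idle -> full resources */
        for (int m = 0; m < NUM_MODULES; m++)
            if (!busy[2*m] && !busy[2*m + 1])
                return 2*m;
        /* pass 2: any idle core, accepting a shared FPU/front end */
        for (int c = 0; c < 2 * NUM_MODULES; c++)
            if (!busy[c])
                return c;
        return -1;                               /* everything is busy */
    }

Note this is the exact opposite of packing threads together so idle modules can power-gate and Turbo Core can kick in, which is the trade-off raised further down the thread.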
Araemo - Friday, January 27, 2012 - link
I've noticed that Windows doesn't always schedule jobs well to take advantage of Intel Turbo Boost. I realize that it probably doesn't have a noticeable level of impact, but I do notice that running only 1 thread of high CPU utilization still doesn't often kick turbo above the 3/4-cores-active frequency. I can use processor affinity on the various common background tasks to pull them all to 1 or 2 cores to activate full turbo, but if a process is only using a percent or so of CPU resources, why schedule it to an otherwise-inactive core if there is an already-active but 98%-unutilized core available? I think the power gating efficiencies would actually be more useful than the pure MHz-related turbo efficiencies (running 2 cores 100MHz faster probably gains you less than you save by shutting down the other two cores completely/partially).
Is there anything to address that behavior?
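That Task Manager affinity dance can also be scripted. A hedged sketch in C (Win32) of a tiny tool that takes a PID on the command line and confines the process to logical CPUs 0-1 so the remaining cores can gate off and turbo can climb; the 0x3 mask is just an example choice:

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        DWORD pid = (DWORD)strtoul(argv[1], NULL, 10);

        HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
        if (!h) {
            fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
            return 1;
        }

        /* confine the target process to logical CPUs 0 and 1 */
        if (!SetProcessAffinityMask(h, 0x3))
            fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());

        CloseHandle(h);
        return 0;
    }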
taltamir - Friday, January 27, 2012 - link
Wouldn't those hotfixes improve performance on Intel HT processors as well?
tipoo - Friday, January 27, 2012 - link
No, Windows already leaves virtual threads from hyperthreading alone until all the physical cores are used, so this won't improve things on the Intel side any. This is specifically for Bulldozer and future architectures like this.
Hale_ru - Monday, February 6, 2012 - link
Bullshit!
Win7 had nothing to do with Intel HT until AMD hit them in the head!
I had so much asspain with Win7's shitty CPU scheduler on FEM and FDTD simulations.
An 8-HT-core setup just reduces overall performance by up to 50% (a HALF!!!) compared to a no-HT setup.
A simple Task Manager checkup showed that Win7 was just putting low-threaded processes on the same core without an option. Just the simplest increment scheduler they have for Intels.
Hale_ru - Monday, February 6, 2012 - link
So, it is recommended to use the AMD optimization patches (only the core-addressing one, not the C6 state patch) on any Win7 machine running simple multithreaded mathematics.
hescominsoon - Friday, January 27, 2012 - link
Shared CPU modules that have to compete for resources? Reminds me of HT v1. IMO this is basically a quad core chip where the other 4 threads available in the primary cores aren't being used all the way. I've looked at the design and it's just nonsensical. This is not a futuristic bet but a desperate attempt at differentiation...with most likely disastrous results. AMD has now painted themselves into a niche product instead of a high performance general purpose CPU.
dgingeri - Friday, January 27, 2012 - link
I like the design of the Bulldozer overall, but there is obviously a bottleneck that is causing problems with this chip. I'm thinking the bottleneck is likely the decoder. It can only handle 4 instructions per clock cycle, and feeds 2 full int cores and the FP unit shared between the two cores. I bet increasing the decoder capacity would show a really big increase in speed. What do you think?
bji - Saturday, January 28, 2012 - link
I think that if it was something easy, AMD would have done it already or would be in the process of doing it.
I also think that it's unlikely that it's something as simple as improving the decoder throughput, because one would think that AMD would have tried that pretty early on when evaluations of the chip showed that the decoder was limiting performance.
These chips are incredibly complicated and all of the parts are incredibly interrelated. The last 25% of IPC is incredibly hard to achieve.
bobbozzo - Friday, January 27, 2012 - link
The hotfixes also support Windows Server 2008 R2 Service Pack 1 (SP1).
Ammaross - Friday, January 27, 2012 - link
The FX-series chips have been shown to be RAM-dependent as well. Strangling these things with 1600MHz RAM makes little sense when AMD themselves recommend a minimum of 1866MHz for these chips, and suggest 2166MHz RAM.
Anand, perhaps you should show charts that demonstrate whether you're able to repeat these claimed results?
tipoo - Friday, January 27, 2012 - link
I didn't know they recommended 2166. Separate article on Bulldozer memory scaling, perhaps? ;)
CUEngineer - Friday, January 27, 2012 - link
I hate how the press and AMD try to play off that a Bulldozer module is like a dual core or similar to SMT... IT IS NOT.
It's simple: there is a single front end in the module to fetch instructions, and this defines the module to be a single "core".... adding integer pipes and making the pipe wider or adding more execution units is simply superscalar architecture they are trying to make wider...
I'd like to see their performance analysis, because I think the ROI is gone when you make your core too wide, and that's what they did - the cores get tremendously underutilized and waste power... GG AMD.
silverblue - Saturday, January 28, 2012 - link
I thought the defining property of a "core" was its integer execution hardware?
KonradK - Friday, January 27, 2012 - link
"You can either do two 128-bit operations at once, or one 256-bit operation at once, on a single module."
This also means that two 128-bit operations from one thread may be executed at once (as long as the second does not depend on the result of the first).
So ultimately the benefits of Bulldozer's architecture are no more and no less than the benefits that come from any Hyper-Threading.
Two threads may run on one core at full speed as long as neither of them utilizes more than 50% of the execution units (which is true for poorly optimized, and some other specific, programs).
They will not run faster than the same threads running on separate cores.
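To make the quoted sentence concrete in code: with SSE a thread can issue two independent 128-bit operations that fit the module's two FMAC pipes side by side, while a single AVX instruction claims both pipes at once. A compile-only sketch in C (needs an AVX-capable compiler, e.g. gcc -mavx; purely illustrative, not a benchmark):

    #include <immintrin.h>

    /* a and b must hold at least 4 doubles each; out must hold at least 8 */
    void both_widths(const double *a, const double *b, double *out)
    {
        /* two independent 128-bit ops: can occupy both FMAC pipes at once */
        __m128d lo = _mm_add_pd(_mm_loadu_pd(a),     _mm_loadu_pd(b));
        __m128d hi = _mm_add_pd(_mm_loadu_pd(a + 2), _mm_loadu_pd(b + 2));
        _mm_storeu_pd(out,     lo);
        _mm_storeu_pd(out + 2, hi);

        /* one 256-bit op: claims both pipes together for the same work */
        __m256d wide = _mm256_add_pd(_mm256_loadu_pd(a), _mm256_loadu_pd(b));
        _mm256_storeu_pd(out + 4, wide);
    }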
descendency - Friday, January 27, 2012 - link
The problem here is that the scheduling isn't the problem.
It's the single core (per clock) performance. This thing is more expensive than a 2500k and yet has to compete with it.
In all honesty, if you are looking for a gaming PC that is still decent in other apps, I'd recommend a 1050T and spend the savings on a better GPU. With an AM3+ motherboard, you can wait for AMD to fix the hardware.
saneblane - Friday, January 27, 2012 - link
I don't need it, I run an i5 2500. It's kind of amazing that people are still confused about whether Bulldozer on the desktop failed. If you are one of those people who think that Bulldozer is worth the money, then you deserve to buy it. Denying the truth just makes all of you look stupid. Bulldozer only managed to bulldoze itself. Sales figures don't matter to me, because I am not AMD nor do I have stock invested in them, so sales figures are irrelevant to most people except AMD and stockholders. It's because of people like you, who kiss AMD's ass for all of their high-end failures, that even to this day AMD doesn't see the need to improve themselves, because people would buy whatever they make. What a bunch of fools you all are.
Flunk - Saturday, January 28, 2012 - link
I think his point is more that while this does improve Bulldozer performance a little bit, it doesn't do anything to make it anywhere near competitive with Intel's offerings. Therefore the product is fairly useless.
bji - Saturday, January 28, 2012 - link
As always, people neglect price when they describe product merits. Bulldozer is not a "fairly useless" product, because it competes well with Intel CPUs in its price range. No, it does not compete with high end Intel CPUs, but those are not in its price range. AMD has no choice but to price the part so that its value proposition is comparable to Intel parts of the same performance, otherwise the part would not sell. This may make AMD less money, but for the consumer it is a fine CPU in its price range.
Ratman6161 - Saturday, January 28, 2012 - link
Just checked Newegg and an 8150 is $40 more expensive than a 2500K and $30 less than a 2600K, so it falls right in between on price.
Looking at the benchmarks, if gaming is not your primary concern then on a performance basis the 8150 competes roughly with the 2500K. If gaming is your primary concern then the 2500K is the clear winner on performance.
So best case the 2500K and 8150 are evenly matched, and worst case the 8150 gets trounced. So it has no business being priced $40 higher. To me, to be competitive it needs to be priced the same as or lower than a 2500K. And if you are someone willing to spend $40 more, you might as well just go $70 more for the 2600K.
So I just can't see how anyone can say it compares favorably with the Intel products in its price range.
Ratman6161 - Saturday, January 28, 2012 - link
With an Intel i3 2100. I suspect the gamers might actually be better served with the i3 than AMD's 8150, other than the inability to overclock the i3. Think of how much better a GPU you could get by saving about $150 by going with an i3 over an 8150.
Just sayin...
medi01 - Sunday, January 29, 2012 - link
Did you forget something important?
Taking into account, cough, motherboard prices, cough?
The "you can get much better GPU for XX bucks" stands for pretty much any non low end CPU.
xeridea - Monday, January 30, 2012 - link
AMD did a blind test with the 2700k and 8150. 28% chose the 2700k, 51% chose the 8150, and 20% were undecided. It is possibly an anomaly that the 8150 got more votes, but it at least shows that it is competitive as a gaming CPU, even compared to the 2700k, which costs $100 more.
IIRC the 8150 is competitive with and sometimes beats the 2500k, and even beats the 2600k in some highly threaded situations.
Your figures are wrong. Intel has the 2600k for $325 and the 8150 for $270. This means the 8150 is $65 less than the 2600k, not $30. It is $40 more than the 2500k but you have to consider generally higher cost of motherboard/memory with Intel, and guaranteed socket incompatibility with every CPU.
arjuna1 - Saturday, January 28, 2012 - link
Yeah, imagine how pathetic a life can be when you need to sustain your ego by hating the competing brand of your PC's CPU.
Fox5 - Saturday, January 28, 2012 - link
Wasn't the point of the hotfix to consolidate threads onto modules, so that modules could be gated and Turbo Core enabled? Isn't that where the performance boost is supposed to come from?
KonradK - Sunday, January 29, 2012 - link
The Windows 7 scheduler has from the beginning tried to group many threads on a single core. It is mainly to maximize the effect of the power gates. I'm not sure whether a core must be completely idle for Turbo Core/Turbo Boost to be enabled.
The purpose of the hotfix was to make the scheduler distinguish between cores belonging to the same or separate modules.
Mech-Akuma - Monday, January 30, 2012 - link
This is what I thought as well. I remember reading somewhere when Bulldozer launched that Win7 prefers to primarily direct threads to cores inside different modules. This would cause modules not to be able to enter C6 sleep, and therefore the performance improvement of Turbo Core would be drastically cut. However, since each module does not need to share resources (while each module is only using 1 core), performance is picked up there.
However, this contradicts what this article says about pre-hotfix scheduling. Could someone clarify how pre-hotfix scheduling worked?
silverblue - Saturday, January 28, 2012 - link
Toms did the same thing. However, I'm not sure it'd make much difference.
richaron - Monday, January 30, 2012 - link
As far as I'm aware 1600 is the most developed RAM at the moment. I consider anything faster as simply overclocked 1600; more "bandwidth", more voltage, but lower CAS latency.
I'm not sure what the reviewers were thinking, but they probably know more than most of us...
KonradK - Saturday, January 28, 2012 - link
"No. Bulldozer adds silicon that actually executes instructions. [...]"Whole idea of Hyperthreading is to let a second thread utilize resources of core wasted by first thread (and vice-versa).
Depending on fact how much the execution units are replicated and how (un)optimally written is code amount of wastes (and benefits from Hyperthreiding) will be lower or greater.
Maybe the execution units of Bulldozer's FPU are replicated to the such excent that it will be wasted in most cases unless used by two threads simultaneusly.
But performance of two (FPU intensive) threads running on the same module will never be equal to the performance of two threads running on separate modules.*
Otherwise hotfixes to sheduler would be useless.
*) Assuming that CPU is not fully loaded i.e. some cores remains idle.
Conficio - Saturday, January 28, 2012 - link
This kind of problem (more intelligent schedulers for a new architecture) cries out for open source to be the experimental proving ground.
So I'd like to know: what is the scheduling behavior of Linux and BSD (and by extension Mac OS X, should Apple ever use the architecture)? Does AMD have any experience here? Do they work with any universities to find optimal algorithms for this new architecture?
If such new architectures, where the "core" concept blurs, will be more common in the future, there is surely some research that can shed light on this topic. Does anybody know?
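On the Linux side you can at least inspect and override this from user space today: the kernel publishes sibling maps in sysfs, and sched_setaffinity() lets a program pin itself while the scheduler work plays out. A minimal sketch in C (glibc; whether a given kernel version actually groups Bulldozer module siblings in those files is exactly the open question):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        /* which logical CPUs share cpu0's core (the module, on Bulldozer,
           assuming the kernel reports module siblings here) */
        char buf[64];
        FILE *f = fopen("/sys/devices/system/cpu/cpu0/topology/thread_siblings_list", "r");
        if (f && fgets(buf, sizeof buf, f))
            printf("cpu0 shares a core/module with: %s", buf);
        if (f)
            fclose(f);

        /* pin the calling process to cpu0 only */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        if (sched_setaffinity(0, sizeof set, &set) != 0)
            perror("sched_setaffinity");
        return 0;
    }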
just4U - Monday, January 30, 2012 - link
I had to figure out what to do for our secondary system. Eventually I decided on the FX-6100, which I picked up for $145... so a little bit more than a midrange i3. The board I got was an Asus M5A EVO for $110. That system is pretty fast... and I think overall a little better than what I'd have gotten out of an i3. I am on an i5 2500K every day.. and before that an i7 920 setup..
These new FX processors are not what some reviewers make them out to be. They're actually pretty good for their price range. Are they going to win any awards? Likely not.. but for most of us you're not going to be pulling your teeth out and whining about it being slow, because they're not.
Mugur - Monday, January 30, 2012 - link
I wonder if the fact that the memory controller is running at 2-2.2 Ghz instead of the full cpu speed on Intel (the uncore part) and the cache latency is higher on AMD maked also FX cpus not competitive in single threaded tasks?Regarding the memory speed, I remember that high speed DDR 3 is required for the AMD APU line, not the FX line...
I recently changed my gaming machine from a Phenom II X2 BE 3.3 GHz (run at 3.8 GHz) to a Core i3 2120, and although some tasks in Windows seem a little slower (like browsing with a lot of pages open - I have no idea why - maybe the amount of cache the PII had compared with the i3?), gaming (Battlefield 3) improved a lot.
I wanted to go the FX route, but simply looking at some gaming benchmarks made me go Intel (plus the fact that I found a cheap Z68 Gigabyte board; people don't take into account that sometimes a good Z68 board is twice the price of a good AMD board). The other components were a 60 GB SSD / 500 GB HDD, 8 GB DDR3-1600 and a Radeon 6870.
What I wanted to point out is that AMD does not compete properly on price with the FX line. A Core i3 2120 is about the same price as an FX-4100, and a Core i5 2500K is priced lower than an FX-8150. Only a high-end Z68 board is (much) more expensive than an 8xx/9xx AMD AM3+ board...
Scali - Monday, January 30, 2012 - link
We've seen Linux kernel patches for Bulldozer that have about as much effect: http://openbenchmarking.org/result/1110200-AR-BULL...
Let's just blame AMD, shall we?
richaron - Monday, January 30, 2012 - link
I've seen Linux benchmarks, before any patch, which show the 8150 performing much better against the 2500K. Check the Phoronix benchmarks (unfortunately there's no 2600K in them). Let's just blame Microsoft, shall we?
Scali - Tuesday, January 31, 2012 - link
Well, no. The point here is that kernel patches don't really have much of an effect on Bulldozer performance. There's no 'magic bullet'.
Whether the 8150 performs better relative to the 2500K under Linux is a different story.
How does Linux performance compare to Windows performance?
It could also be that Linux is simply slower on the 2500K than Windows is (which I'm quite sure is the case).
So no, I'll just blame AMD. The Bulldozer module architecture just doesn't work.
superccs - Monday, January 30, 2012 - link
How come this was not developed in association with Windows BEFORE release? That's like releasing a new GPU without a driver that supports the new feature set. I sure hope the recent house cleaning at AMD got rid of some of the upper-level jack@sses responsible.
wingless - Monday, January 30, 2012 - link
The 4100 and 6100 look like they could gain a bit from this patch as well. Will there be a test with them included?
just4U - Monday, January 30, 2012 - link
I don't see either of them in the Bench results. Did Anand even review them? I don't think they did.
Trailmixxx - Monday, January 30, 2012 - link
Has dropped by over a factor of 5 on my system after applying the hotfixes. Can someone else confirm this? And possibly on an Intel system as well?
b_wallach - Monday, January 30, 2012 - link
One problem this CPU has had from day one is that it's overpriced by Newegg and others. AMD wanted it sold for around $240.00. Because of the high demand and low volume, they jacked the price up because they could "get away with it". If you put its cost where AMD wanted it to be, the picture becomes a bit clearer.
I keep getting the feeling that all anyone wants AMD to do is make an Intel clone that performs equal to Intel's CPUs. That is stupid, backward thinking. Innovation takes guts because there is more chaos attached to it, but that is how we step forward into new and better things. Without it we just stagnate.
I am more inclined to go with AMD because they are willing to innovate, to take a chance and build a CPU from a different angle.
And while this can be troublesome, as we now see, and will have growing pains, I'm going to keep up hope that when its software issues are worked out, and some of its latency is worked out in the next gen, at least they are trying to make a better product instead of just being an Intel drone company.
And as some longer-range views have pointed out, while it's having issues in some areas, it excels in others, like multitasking, or running two or more programs at the same time. No more programs going crazy when something like Norton decides to run a virus check in the background, etc.
If all people want is clones, then buy only Intel if you want.
But if you manage to talk everyone out there into doing just that and AMD goes away, don't come back here crying about how much your next computer costs, because it will be a LOT more expensive, and new CPUs from Intel will slow to a trickle, as there will no longer be a need for them.
This is a good reason alone to support AMD no matter what they make.
The whole AMD-only or Intel-only thing is such a load of crap in the long run. Both work well with current software. The real issue is whether AMD will be able to survive, and if not, whether they'll take ATI down with them as well.
If so, the whole computing landscape will change into a very dark and nasty place. Intel and Nvidia would love it; the rest of the computing world would suffer total despair. It's easy to lose a business and next to impossible to start it back up again, so if AMD keeps getting trashed by the media and people fall into line with that, the outcome will not be a pretty one.
It's almost too obvious that the bulk of the people here weren't around, or weren't old enough to remember, when it was just Intel in the 286 and 386 days. CPUs cost a ton of money back then; computers were not cheap. That gave Apple and even Atari some business, and AMD started to come along as well, and costs slowly came down because competitors could survive in that kind of setting. But times have changed way too much for that to be repeated.
It's too bad more people don't think along the what-if line, so they would know what to expect in a best- or worst-case scenario.
It's one reason why I'll use an AMD CPU. I can run anything I want just fine, and do it at a reasonable price. And it's mixed with a little thank-you for getting prices down to affordable levels.
That is worth supporting.
I keep getting the feeling that people think of these as "I have the fastest computer on the block, look at me" - the keeping-up-with-the-neighbors scenario that is funny in the best of times and horrible in the worst of times.
And I don't care where that takes them in the long run because I never want to think that far ahead.
Heck, I seldom see any long-range thinking at all on the internet, and that is scary. And I always thought that was just a politician's disease.
If you could really stop and think, you'd know just how much better off we are today because of AMD's willingness to take chances and try something new; without them we might still be running 32-bit single-core CPUs.
I'm not being an AMD fanatic, just a realist.
saneblane - Monday, January 30, 2012 - link
AMD first introduced a lot of the features that Intel used in the Core architecture, and Intel didn't mind copying them. So if even a company as big as Intel can copy, why can't AMD? The cross-license agreement between them exists for this very reason. AMD should take the best of every feature set used in x86 processors and make a good CPU with them. It seems to me that AMD might have too much pride for that, but from a business standpoint it's the best thing they can do until they have gained more money.
Scali - Wednesday, February 1, 2012 - link
The cross-license agreement is only about the use of the x86 instruction set and its extensions. Specific implementation details such as Intel's caching, HyperThreading, manufacturing process and whatnot don't fall under that license.
Scali - Friday, February 3, 2012 - link
I was around in the 'Intel only' days as well... and I find it funny you say "It gave Apple and even Atari some business".
Apple and Atari had been around in the home/personal computer market before IBM. It was the IBM PC that was trying to take business away from them, not the other way around.
I'm not so sure how much AMD contributed to getting costs down, if costs went down at all...
A high-end 486 CPU cost about the same when AMD released the Am386 (its first x86-competitor) as a high-end Core i7 costs today.
I think the main differences are that the rest of the computer became a lot cheaper (monitors, PSUs, HDDs, motherboards, memory etc), and that these days, the mainstream and low-end ranges are often based on the same architecture, where in the old days the 'mainstream' alternative to a 486 would be a 386, and low-end would be 386SX or 286. So only high-end was state-of-the-art, if you were on a budget you bought older architectures at reduced prices.
I don't think AMD had much of a share in either development.
Veroxious - Tuesday, January 31, 2012 - link
My real gripe with Bulldozer is the fact that it competes with the 2500 but does not even have on-die graphics. Less performance and less product for your money = FAIL!
matt0609 - Wednesday, February 1, 2012 - link
How come no one does reviews by manually setting the threads and memory per core? Adobe After Effects CS5.5 allows this, and my FX-8120 renders a heavy themed project file pretty quickly...
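For applications that don't expose a thread setting the way After Effects does, the same experiment can be forced from outside. A small sketch (the 0x55 mask assumes the module pairs are (0,1)(2,3)(4,5)(6,7) on an FX-8150; verify on your own box) restricts a process to one core per module before it spawns its workers; the start command in cmd.exe also accepts an /affinity hex mask that does much the same from a shell:

/* Restrict the current process to one core per module (by the
   assumed (0,1)(2,3)(4,5)(6,7) pairing) and hand off to the real
   application; child processes inherit the mask. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* 0x55 = binary 01010101: logical CPUs 0, 2, 4, 6 only. */
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0x55)) {
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Pinned to one core per module; launch the render from here.\n");
    /* A real harness would CreateProcess() the application at this
       point so it inherits the restricted mask. */
    return 0;
}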
markmpx - Friday, March 16, 2012 - link
There are things that BD does well, and things it does badly (like most games). It's not good for many Windows users, but actually not bad for many Linux users. It's a pity we didn't get a chip that performed better than Sandy Bridge at a lower price, but that was too much to hope for.
Some real applications it does OK at: http://www.overclock.net/t/1141562/practical-bulld...
It's sad that BD performs poorly in single-threaded applications; AMD didn't quite get the mix right in this design and will hopefully improve it in subsequent versions of the chip. I like the fact that they don't keep changing the CPU socket, while Intel have recently released the 1155, 1156, 1366 and 2011 sockets!
For my current applications, a cheap 990FX motherboard (they all seem to have working IOMMU support) and a cheap Bulldozer can do much the same job as an i7-3820. It's also a nightmare to find an Intel board for a 3820 chip that supports VT-d properly in both the motherboard and the BIOS. So for things like Xen, AMD isn't a bad choice.
We are still lucky that AMD is competing with Intel. Competition and innovation benefit us all and help keep prices reasonable. (With WD and Seagate buying Hitachi's and Samsung's drive businesses, competition in hard drives has all but disappeared. No wonder we pay so much for hard drives now!)
smith1968 - Friday, March 16, 2012 - link
I have been using AMD CPUs for years. I bought an FX-6100, stuck it in my 890FX motherboard, and noticed a great improvement in gaming. I don't know why people call it 'useless' or worse, but I can play Battlefield 3 on Ultra at 1920x1080 and get between 43-48 fps, which is fine - and that's in the high-action parts of the game; it sometimes jumps up to 60 fps at the absolute max. I only have an HD 6950 2GB, not unlocked or overclocked, and 8 GB of memory on an MSI 890FX board. I paid £114 all in for the CPU, so I'm not complaining; if gaming were worse than on my unlocked Phenom II 550, I would have sent it back.

The hotfix did not make any difference that I can notice; my machine is for gaming. It might not be as great as the i7, but so what - as long as my machine does the job I need it to, that is all that matters. All these benchmarks that do the Bulldozer CPUs no favours have not put people off; we can't all afford Intel.
garadante - Friday, April 6, 2012 - link
I know the architectures are very different, but the entire scene with AMD's Bulldozer architecture reminds me very much of the scene with the Cell processor when the PS3 launched - and the aftermath. The Cell processor had so much going for it at the beginning. It had extremely strong computational potential, and pre-launch, the computational-powerhouse aspect of the PS3 was its biggest selling point. In terms of raw power, it was the king of the consoles. But look at what happened. Cell is effectively dead in anything that isn't a niche server market, as far as I'm aware or concerned. It didn't matter that the architecture had incredible potential; it required a radical change in the way software was developed (by game developers for the PS3), and the developers never really took the bit between their teeth. Perhaps they've gotten better at utilizing the Cell architecture now, but that doesn't matter, as the rumor mill is pointing towards x86 processors for -all- of the new consoles. Cell isn't even in contention, and I highly doubt any console company will try to force a radical architecture change on the market anytime soon.

My point is that we can't assume Microsoft will ever code properly for the Bulldozer architecture. Unless Bulldozer can sway developers into looking at it with future prospects in mind, it wouldn't surprise me to see the software industry modify their software just enough to work with Bulldozer, then abandon it at the first chance. I know that comparing the processor market with the console market is like comparing apples to oranges, and even in saying the next generation of consoles will all have x86 processors, the market there has changed a lot in six years as well. I just want to bring up this comparison, so that perhaps some other people who remember the pre-launch PS3 hype - the seemingly overwhelming advantage the console had in potential, and how that never panned out as hoped - can bring some new points to the Bulldozer conversation.