nm refers to nanometers, and 65nm is a technology node. Processor manufacturing processes have been steadily shrinking: for example, 250nm to 130nm to 90nm to 65nm to 55nm, etc.
Chandu naanna, if you take a look at the context, i.e. the sentences before and after the "wrong" sentence, you should be able to understand why it should be "W" instead of "nm".
"High end desktop CPUs now spend their days bumping up against 125 - 140W limits. While mainstream CPUs are down at 65W. Mobile CPUs are generally below 35W. These TDP limits become a problem as you scale up clock speed or core count."
As usual Anand, the article was great and really helped with figuring out where I am taking my wife's next computer. I was tempted to go AMD this time around for her next PC, but after reading this I believe I will stick with an Intel solution. Thanks man!
You would have to be insane to pay $1000 for a chip that may be good for gaming. At $199 with slightly lower performance it's a no-brainer. When I build a system, I don't care if the frame rates etc. are 10 to 15% better. Who cares; the chip is fast and I have no problems playing high-end games. I have no special setup and it does everything that my friend's i7 can do. Good for me, I get more PC for the buck. Go ahead and go broke buying just a motherboard and CPU when I can get a modern motherboard, a CPU, 6 gigs of DDR3-1600, a 1TB HD and a DVD-RW. More for me.
I have compared Intel and AMD almost since AMD's start (the K6-II), and I believe AMD hasn't changed their policy much: they still offer more for less. One difference is that the market back then was also different from today; nowadays you can easily find all the hardware you need, and it's compatible and works with AMD. It's accurate to say it's not about absolute performance, it's about performance at a given price, and on that score I think AMD wins.
That's pretty disappointing scaling from the 965 to the 1055T. I'd be willing to bet the lack of added cache (not to mention the memory controller issues the article raises) is really holding back the performance. Unfortunately, six cores at 45nm hasn't left them with much of a choice as far as that goes.
A bit underwhelming since before I read this bench, I saw overclock3d benches (they had 1090T at stock absolutely thrashing an i7 at 4GHz). Seems like the ideas that most of the people had with threading were true though. Overall I think they should be decently competitive because they do hold the advantage in cores.
It's nice to see AMD pushing Intel a bit, though this does not make me regret my earlier purchase of the i7 860 as much as I would have liked it to.
Is there some kind of error with TurboCore? It doesn't look like it's working because those Dragon Age (optimized for quads?) and Dawn of War FPSs were a bit disappointing! :(
Well, as it has been for the last couple of years. This can't go on forever, though. Just think about the margins that Intel has versus the ones AMD has (if any...). The problem I see is that in the cheaper price regions (= mainstream), users do not need a ton of cores. In the end most could easily live with 1 core, even though 2 would probably lead to a noticeable difference.
So I'm not sure if it's very intelligent to invest money in these designs. Just look at the die shot. I'm not an expert, but you don't need to be to see that they just attached 2 more cores to an existing design, probably having to make a ton of trade-offs (like the L3 cache).
I mean, it is theoretically great, the easy upgrade path, but again, how many users actually ever upgrade? IMHO a tiny fraction. Of course those i3/i5 dual cores from Intel are overpriced. But that's what most company PCs will use. Huge profits for Intel. I hope at least AMD's answer to Atom will be very good (= a lot faster, same power consumption). But I doubt it...
The entry level crowd isn't the problem. Someone who only requires basic performance should buy the cheapest CPU, and AMD has that slot. For the performance crowd, Intel is the clear winner, but it's not the majority of the market. The real issue here is the price performance crowd, the people who want good performance for a good price. This is a pretty large market and AMD is trying to win here by reducing the price, but it's not doing great by this review.
I think AMD still makes money on all CPU's, just not nearly as much as Intel.
"Someone who only requires basic performance should buy the cheapest CPU, and AMD has that slot."
I think that's his point though? This CPU is not the 'cheapest'. Why invest in a design like this? As Anand says in the conclusion, for 'mixed' usage the i7 860 is a better choice. Only people who need heavily threaded performance may want this CPU. Especially at the price at which AMD is selling these six-cores, can they really get a good return on investment on the development of this CPU? It seems like dual cores and quad cores are where the bulk of the sales will be.
Your argument is short-sighted. You can't stay with 4 core models indefinitely and expect to remain competitive in the future. R&D money has to be spent on the new technology even if your niche is the Price/performance budget sector.
I think it's short-sighted to just look at the number of cores. What AMD REALLY needs is a core architecture with a good IPC and relatively cheap to produce. That is what made the K7 and K8 so successful. Currently AMD is just throwing more transistors at the problem, which is not a good thing, given their position (45 nm process vs Intel's 32 nm).
If anything Intel proves that it's not the number of cores, since Intel's quadcores generally outperform AMD's six-core. So as it stands today, Intel can stay with quadcores just fine.
Exactly my thought. But I must admit I did not think about servers at all. If it was easy to adapt a server CPU that had to be made anyway, then it's a different story (is it half a Magny-Cours?). But generally I would say a completely new architecture is needed soon to stay competitive. The Phenom architecture is quite old now and never was that good anyway, especially compared to the Core architecture...
I think the same argument goes for the server market... AMD cannot compete with Intel on performance, so they have to compete on price, meaning low profits. AMD *could* compete with Intel in the server market, but that was before Nehalem. Now Intel has some very strong offerings in the server market... So it's the same story either way... should AMD really try to build such large high-end CPUs, when they know they won't be able to compete on performance anyway, and competing on price is dangerous when your competitor is always a manufacturing node ahead?
AMD *could* have had the Atom market, but they actually killed off their Geode line of CPUs just before Atom was introduced and pretty much owned the netbook market.
Everyone keeps mentioning that AMD needs a new architecture, that they can't remain competitive with the current, aging Phenom core, and that AMD is wasting time and money by just throwing more cores at the problem.
I think you guys are missing the bigger picture, the Phenom II X6 is meant solely as a stop gap, an effort to remain somewhat competitive and keep revenue flowing while they finish baking Bobcat and Bulldozer, due out next year. AMD knows that the Phenom II isn't a show stopper, or even a headlining act; that's why these CPUs were released without any major fanfare. They aren't trying to fool anyone into thinking these chips are something they're not, they're simply trying to hold down the fort.
I think that once complete, the Bobcat / Bulldozer line will be what puts AMD back on the competitive front. Just look at the code names they chose: the last time AMD got bold and used suggestively strong code names (Tack Hammer / Claw Hammer / Sledge Hammer), the end products (Turion 64 / Athlon 64 / Opteron) lived up to expectations, and in fact proved to be game changers. I believe the same will hold true with Bobcat / Bulldozer.
Another point I'd like to make is that over the past 10 years AMD has matured quite a bit, growing up from being a completely non-threatening, low-end, sub-generic CPU builder, with seemingly no chance of ever amounting to much more than that in a tightly controlled Intel world. From those dark days AMD has grown to become a global player in the market and a competitive threat to Intel; they've come a long way from the AMD of the 90's. Most people think that the whole Barcelona / Phenom fiasco, with the TLB erratum and delayed launch cycles, spelled the end of a competitive AMD. I think that whole fiasco is exactly what AMD needed; people learn the most from their failures, not from their victories. Just look at Intel: they had all but stagnated all through the 90's and the first half of the 2000's. It wasn't until AMD came along and kicked them right square in their out-of-shape and bloated ass, and then outran them for a few years, that they got back in shape, so to speak, and started making competitive and compelling products again. So only time will tell if AMD has what it takes to be a competitor, or is just a fluke...
As for this review, why did you use such old drivers for the AMD chipset and graphics card? I mean seriously, the chipset drivers you used were developed a couple of years before the 890FX chipset you were testing was even released. How about using 10.3 over 8.11 and 9.12? Or is there something special about the 8.11 and 9.12 drivers? Given the gap between your versions and the current ones, I'd say you're somewhat kneecapping the tests.
"I think you guys are missing the bigger picture, the Phenom II x6 is meant solely as a stop gap, an effort to remain somewhat competitive and keep revenue flowing while they finish baking Bobcat and Bulldozer, due out next year. "
I don't think we're missing the bigger picture, we're pointing out exactly the same thing: the X6 is a stop-gap, and what AMD *really* needs to become competitive again is Bobcat/Bulldozer. We're just saying that they need it NOW, rather than next year (and it has to be a success as well, not the fiasco that Phenom was... a new architecture is no guarantee of success, obviously). AMD has been struggling ever since Core was introduced in 2006(!). They REALLY need a good, new architecture now; it is long overdue.
I think you are missing a lot. What the hell are you talking about? Intel is throwing more transistors at the problem, not AMD. Compare the cores, Intel needs >50% more transistors per core. They need 8 threads for Nehalem to get full performance, AMD only 6 for Thuban. Intel is just brute-force. Already today AMD has a smarter and more effective design. And the really interesting designs (Bulldozer, Llano, Bobcat) are not even launched. The only problem for AMD is software related, not hardware. The widely used compilers under Windows, MSC and ICC, optimize much more for Intel. And your knowledge about servers also seems to be outdated. Compare Magny-Cours and Gulftown. The Opteron is more powerful and more power-efficient. And it is still 45 nm. You really want to claim AMD isn't executing well? Then you must be joking or an Intel flamer. X6 and i7 are not for people that utilize one or two threads most of the time. Thuban is not meant to be a mainstream CPU. Therefore, Deneb, Propus and Regor are there. Thuban is for highly threaded workloads. And then an X6 1090T ($295) can outperform an i7 860 ($284) and also an i7 870 ($562). If you ask me, the X6 1090T offers very good value for such scenarios. The X4 1055T ($199) offers even more value.
Intel doesn't "need" more transistors, they are just in the position that they can implement a lot more cache than AMD, without getting into latency problems or getting too much power consumption. This 6-core 125W TDP processor has problems keeping up with Core i7 8xx series, which are only 95W TDP. So how is AMD more energy-efficient? Only if you compare it unfavourably.
If you want to pay for Intel's advertising, then you pay more.
It's like people paying for HP or Dell - a rip-off -
when we all know the ASUS or MSI in the corner that nobody talks about is so much better.
Use your head.
The Apple iPad is an oversized iPod Touch that people love to buy only because of advertising, yet you can buy a Windows touch tablet at half the price.
Intel chips have had bad times too; they have sold bad chips with cores that don't work, and so has AMD. They are both the same.
But if you want to get sucked into Intel's rip-offs, then that's your own fault.
It will be funny when I have the AMD 8-core and lock 4 of its cores, and then compare it to any Intel chip. By the time Intel makes anything better than that, people will not be paying $3000 for the same thing they can get at $400.
You talk about the Intel quad-core being better than the AMD 6-core; you really need to look at locking and unlocking cores. Lock 2 of the AMD's 6 cores and it will crap all over the Intel.
I would guess the hex-core is a lot more useful in servers, and there is a consumer version of the chip since the bulk of the development is expected to be funded from server sales.
Most office and browser applications run quite fast enough, even on slower chips. A 67% performance difference will frequently amount to only a few microseconds, or less, in execution time.
As for gaming, casual and moderate gamers will be happy as long as they hit 30 frames per second at 1080p at moderate detail, which mid-range dual cores can handle in most games.
So the big differentiator is multi-media: very intensive applications where speed or extra cores, depending on the app, can significantly reduce runtime. For instance, the hypothetical 67% performance improvement mentioned above can change a 90 minute encoding job into a 54 minute job. Or result in lower times while using Photoshop. Or let you run a scheduled anti-virus scan in the background without impacting your gaming or video performance.
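As a quick sanity check on that figure (my own arithmetic, assuming "67% faster" means 1.67x the throughput):

$t_{\text{new}} = \dfrac{90\ \text{min}}{1.67} \approx 54\ \text{min}$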
For many people, possibly most, more cores (or more threads) are more useful than clock speed, because they'll only *notice* the difference when running processor-intensive, heavily-threaded apps, or when heavily multi-tasking. And that is becoming increasingly common as multi-media becomes a primary home use for PCs.
If that's not a factor in their typical usage, then users are probably better off with an i3, or - and here comes the extra core advantage again - an Athlon II X3, which performs similarly to the i3, but for $40-$50 less.
As for profit margins, AMD just posted one of its best quarters ever, so the "more cores for the dollar" strategy, which provides more processing power where users are most likely to actually notice it, seems to be working to some extent. No, AMD won't overtake Intel, but it can be a profitable mainstream niche - and Intel is not likely to challenge them too seriously, for the time being, given its current problems with anti-trust in the US and Europe.
Do AMD's numbers for the quarter include graphics sales? As that should have been very profitable for them, being mostly the only game in town and able to sell above MSRP.
Yes, absolutely. I wished more GCC builds would be benchmarked. Intel's architecture is not as good as many people believe. Most of Intel's advantages come from better software support.
Intel 6-core early adopters will feel like they were ripped-off. I have the Asus USB 3.0 (for review) for a few weeks now and I'm waiting for this processor to test it with - thanks for the review, I'm sure it'll help me a lot.
Well, in my opinion the difference in performance versus the price doesn't justify it. The gains look nice in a bar graph and all, like the video encoding performance, but in reality it's just a few seconds.
It's all "just a few seconds". I'm going to wait for the Intel's consumer-priced hex-cores before I do anything. Right now, AMD needs 50% more cores to even match Intel's parts in heavily threaded code. Running out right now and buying all new kit might be leaving you feeling like "cores on the ground" if Intel comes out with the consumer-priced stuff. ;)
Price/performance has always been on an exponential scale. AMD was no different when their Athlon FX chips were the fastest CPUs around. Intel doesn't call them Extreme Edition for nothing. I just get tired of people who go around on the internet telling everyone that Intel only has $1000 CPUs, and therefore Intel is overpriced. The fastest desktop CPU on the market has been priced around $1000 for years, regardless of whether it was an AMD or an Intel. That just seems to be how the market works.
"I just get tired of people who go around on the internet telling everyone that Intel only has $1000 CPUs, and therefore Intel is overpriced."
Sorry man, but this isn't what I'm implying, which is why there is an "in my opinion" in my explanation. It was really just a personal opinion, nothing else.
I think people buying an Extreme Edition CPU know exactly what they're getting themselves into. Those CPUs are never good price/performance, you pay a premium to get the absolute fastest CPU on the market, that's the whole point of the Extreme Edition concept.
Obviously Intel isn't going to offer only one expensive 32nm six-core forever. Perhaps this X6 CPU will trigger Intel to release more 'mainstream' six-cores and other 32nm CPUs.
For home use, yes. For professional use, that 48.8 frames/second on the 980x vs. 28.5 on the 1090T, for x264 2nd pass encoding, looks quite justifiable. If that's your business, that'll pay for itself in a couple of weeks.
Anand, why do you not try to push the chip on overclocking? Also, why not do an I7 overclocked vs Phenom X6 overclocked performance comparison? Overall, I feel that this review was pretty limited and unenthusiastic for such an exciting product.
Second this. I was really hoping you'd do some overclocked benchmarks, say at 4GHz, so that we could see clock-for-clock performance of 6 Thuban cores vs. 8 Bloomfield/Lynnfield threads.
Check the x264 and Cinebench results. Clock for clock, at 2.8GHz, two hyper-threaded Lynnfield cores seem to match three Thuban cores - at least for rendering & encoding purposes.
I try to provide a look at what sort of headroom you can get out of the chip while feeding it as little voltage as possible. The idea is to keep power consumption at a minimum while increasing performance. I found that the jump from 3.8 to 3.9GHz required quite a bit of additional voltage, while just going to 3.8GHz was basically a non-issue - which to me is more impressive than trying to squeeze another 100 - 200MHz out of the chip.
I agree with your approach, Anand. As a customer, and for a general review, I'm most interested in what performance I can get without increasing the core voltage and power consumption.
Pushing the overclock seems more suited for a separate article, or for sites specializing in overclocking or gaming.
+1 on more overclocking testing. It could be a new article dedicated to investigating OC vs. stock against the i5 CPUs. Also, I remember seeing some results for power consumption per unit of performance at different clocks and voltages; that would be interesting to see for the PII X6. I would also like to see what effect just bumping the FSB up from 200 makes, and how high you can get it with stable turbo modes. Finding the max stable FSB with a lowered multiplier gives a hint as to how far it can clock, and how far the non-BE versions can clock.
I'm considering a 1090T BE or 1055T to replace my 9850BE (too hot, and won't OC) until Bulldozer chips come out. If the 1055T can take a 20-25% FSB increase and stay stable, that would be my choice.
I miss when Intel was on the run for the market... AMD's 6-core does well, but it's only marginally better than a 4-core from Intel. True, you get more cores, but is that small performance gain worth it? I think not.
Overclocking it is definitely interesting, and it would be nice to know whether its power consumption is much lower or not. It's a lower bin, so it should have slightly higher power consumption at the same clock, but it does have a much lower clock.
We didn't have an actual Phenom II 1055T, we simply underclocked our 1090T and lowered the turbo core ratios to simulate one. This is why we don't have power consumption results for it either.
I find the results of the 7zip compression benchmark a bit weird. I own a Phenom II X4 955 and I'm currently compressing something (800MB) in a windows7 virtual machine with only two cores (way to go MS!). Still, I'm getting something more than 2MB/s (and this virtual machine has crappy disk I/O, maybe less than 4MB/s sustained). LZMA is slower on data that compresses badly so it's not possible to draw conclusions but overall, it still looks weird.
Which version of 7zip is being used? 4 or 9? And which minor version? Also, are you using LZMA or LZMA2? IIRC, LZMA can only use 2 threads while LZMA2 can use more (maybe powers of two, not sure). Also, I think that the 64bit versions are faster. I guess you're using 64bit binaries but it's better to check (running 32bit software on 64bit OSes indeed has a cost), maybe like 5% to 15%.
I'm using 7-zip 9.10 beta and LZMA compression in order to compare to previous results, which unfortunately limits us to two threads. In the future we'll start transitioning to LZMA2 to take advantage of the 4+ core CPUs on the market. Today I offer both the benchmark results (max threads) and the compression test (2 threads) to give users an idea of the spectrum of performance.
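For readers who want to try the multi-threaded mode themselves, here's a minimal sketch of driving 7-Zip with LZMA2 and an explicit thread count from Python; the -m0=lzma2, -mmt and -mx switches are standard 7-Zip options, but treat the exact values and file names as placeholders.

import subprocess

# Compress a folder with LZMA2, 6 threads, maximum compression level.
# Assumes the 7z executable is on the PATH; adjust names to taste.
subprocess.run(
    ["7z", "a", "-m0=lzma2", "-mmt=6", "-mx=9", "archive.7z", "input_folder"],
    check=True,
)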
From the Bit-tech review's conclusion: "Despite being an astonishing £600 cheaper than the exorbitantly-priced Intel Core i7-980X Extreme Edition, the X6 1090T BE still isn't a very good buy."
There are two ways you can get cashback with TigerDirect:
1) If you look under cashback stores, you get a lower cashback rate. 2) If you use the search bar and type in a key term, you will get a different cashback rate; with TD the key term is "Tigerdirect". With this trick you will get 12.3% cashback.
If you do bing cashback and actually search for tigerdirect you get 12.3% bing cash back. $299.99-$36.89 (12.3% Bing Cash Back)-$50.00 (Mail in Rebate)=$213.10 after rebates and bing cash back
Usually AT is my go-to for hardware reviews, but I have to say the overclocking section doesn't even look like any effort was put into it. It's a BE part, and you don't review how well it tweaks? Other reviews on the net have this at 4GHz+, and include both the OC and non-OC results in all the charts.
I think most people who buy the BE part will not keep the stock cooler. I use a TRUE Cu, though I realize mainstream might be something a little less. At least throw a Zalman 9xxx on there and see what it can do with that. Benched at 4GHz, the 1090T is competitive right up to the 980X (stock), which I think gives people a little more info on how high they can "reach" with this CPU.
I am in no way an AMD fanboy, but the $300 price tag for this performance seems like a leap for AMD. It has always been in my mind price/performance rather than work/clock cycle or the like.
Are those other reviews using a 32bit OS or a 64bit OS? The last time I checked, the Phenom/Athlon II series was still poorly overclocking in 64bit mode. If it's still happening then any overclocking results would vary wildly depending on the OS used.
I understand (and substantially agree with) the comments and conclusions regarding how the 6-cores Phenom compares against 2 and 4 core CPUs from Intel. I wonder though if these benchmarks are capturing the real benefits of 6 cores. In my 'daily' use I have several programs running in background: virus-scan, instant messaging, music players, email clients, browsers (that regularly update RSS feeds) and sometimes also torrent clients. These all consume some CPU cycles, obviously. With all these running in background, I wonder if the difference between a 2-core and a 6-core CPU will be more pronounced. In other words: does it make sense to compare two multi-core CPU by running a single application at the time (albeit, possibly, a multi-threaded one)?
Yep, like on my netbook with a Z520 Atom: AVG + uTorrent + FF or Chrome, no issue (unless of course HD Flash movies, but that's another story).
On my desktop (which is also a pretty old and crappy E4300) I also have SETI@home, so basically I'm always at 100% CPU, but I still feel the HDD is the limit.
Yes it makes sense... things like browsers, IM, email don't take a lot of CPU. They can easily be juggled by the OS on just a single core (or with hardware, using HT). More processes don't necessarily require more cores. I mean, I am currently running two instances of Visual Studio, a browser with 10 tabs open, Skype, Spark, Notepad++, Outlook and a few other small things in the tray or background, and my dualcore still is at 1% CPU usage, according to Task Manager, and that 1% CPU is Task Manager itself. So why would I want 4 cores, let alone 6? It really doesn't matter.
I think there is something really good that could be tested here: the performance of these chips running virtual machines, especially with hypervisor technology.
Any possibility of testing these chips running either Xen or VMware and seeing how 4 virtual machines behave on each, and how 6 behave? Is the performance stable, etc.? The reason I ask is that, at this price point, if it can run 6 virtual machines, each on its own core or sharing cores, and maintain good performance, it would be really worthwhile investing in these for cheap virtualisation servers.
Well... we haven't seen the consumer priced hex-core Intel parts yet... Everybody is comparing this to the high-end Intel parts (i7-980X is a high-end part). I'll wait to see what Intel's response is before removing them from the table.
Enjoy the waiting game! I still doubt Intel would lower prices to anywhere near AMD's 6-core, due to the fact that they would be undercutting their much more profitable mainstream parts.
Thanks for the review! A good read overall, but a couple of niggles...
Why are there no i7 860 results in the results charts for 7-Zip, Batman, Dragon Age and Dawn of War II? Likewise, why no 1055T data in the power consumption charts?
On the rendering tests page, you said this about the Cinebench test, where the 1090T does well:
"... if you've got a lot of CPU intensive threads there's no replacement for more cores."
But this is contradicted by the earlier comment for the 3dsmax test:
"Not all heavily threaded workloads will show the Phenom II X6 in a good light. Here Intel maintains the advantage:"
Clearly, generalising here is unwise. For some reason, the Ph2 X6s lose out on the 3dsmax test; but why? Lack of cache? Insufficient RAM bandwidth? Any ideas?
What kind of overclocking is possible with a good air cooler? Results with the stock cooler are not really that interesting. I've bought a Thermalright U120E RevC to go with my 860.
Lastly, on the conclusions page, you say:
"Applications like video encoding and offline 3D rendering show the real strengths of the Phenom II X6."
Not so, IMO. The i7 860 did better for the 3dsmax test (I'd really like to know why), and although the 1090T was faster for the Cinebench multithreaded test, if you take into account the higher power consumption of the 1090T, it offers a virtually identical power/performance ratio under load. One of the reasons I chose an 860 system for video encoding was because of concerns about power consumption; I had expected the two extra cores in the 1090T would give a clear advantage in this respect, but it doesn't (slower for the DivX test anyway, which is the encoding system I use, and a worse power/performance ratio for the x264 2nd-pass test). In other words, where the 1090T offers a speed edge, it's small (2 entire extra cores not really helping that much at all), and its usefulness is negated by greater power consumption under load.
Hmm, can you confirm that all six cores in the 1090T were being used during the DivX and 2nd-pass x264 tests? It just seems so odd to me that a 6-core chip isn't doing better. I notice the scaling over the Ph2 X4 965 isn't that great here.
Btw, have you considered using V-ray as a render test? Studios using 3DSMax often use V-ray as it scales to loads of cores AFAIK - I've been speccing out 32-core systems for a company in order to exploit this. Full details at vray.info.
Also, re pricing, the 1090T appears to be about 238 UKP in the UK (eg. scan.co.uk), while the 860 is slightly less (234 from Scan), though after some shopping around I was able to find the 860 for just 205. Clearly, if Intel is even remotely concerned about the Ph2 X6, all they have to do is a minor price drop on Lynnfield to maintain a useful price/performance edge.
Lastly, a word about BIOS support for these CPUs. Recently I've been scanning eBay for AM2+/AM3 mbds, seeing what's available for a friend I'm helping out with an upgrade; checking each board's specs, it's shocking just how many boards have not received BIOS updates to support even the Phenom2 X4 and Athlon IIs, never mind these new X6s. ASUS boards seem especially prone to this issue. So watch out: always check the CPU support list before buying. I'm still peeved that my existing expensive M2N32 WS Pro can't use a Ph2 X4 (my gf's cheapo Asrock mbd can). For this reason I went with an Asrock P55 Deluxe for my 860 build, as Asrock seem to be much more on the ball with BIOS updates.
If it were up to me, I would make it compulsory for vendors to include CPU support on any board that has a theoretically compatible socket... :}
Ian.
PS. Perhaps a more concrete point to make is that the Ph2 x6s now definitely make the i7 870 utterly pointless given its price (85% more expensive than the 860 for less than 5% higher clock).
The 3dsmax test was an outlier if you look at the full set of results in Bench (www.anandtech.com/Bench). It could be a number of problems. Not all applications do well with core counts that aren't powers of 2 for starters. I don't believe we're memory bandwidth limited, but the 6MB L3 might have something to do with it.
Like many benchmarks it's very difficult to keep 6 cores fully pegged. The x264 HD test ranges from 50 - 94% CPU utilization across all six cores on the Phenom II X6.
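For anyone who wants to watch that themselves while a job runs, here's a minimal sketch of a per-core utilization logger; it assumes the psutil package is installed (any per-core monitor, including Task Manager or perfmon, works just as well).

import psutil

# Sample per-core CPU utilization once per second for a minute
# and print one line per sample.
for _ in range(60):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    print(" ".join(f"{load:5.1f}%" for load in per_core))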
I haven't looked at vray as a benchmark, I'll do some digging :) And I agree completely on the issue of BIOS support. You should see how these things work behind the scenes, motherboard manufacturers are often still fixing BIOS issues on brand new boards even days before a launch like this.
Anand writes: > The 3dsmax test was an outlier if you look at the full set of results in Bench
Have you tried other render tests such as Maya Mentalray, or Alias Render?
Note that I have a render benchmark which is extremely cache-friendly and scales to hundreds of threads, might be useful for ruling out the cache as an issue:
> Not all applications do well with core counts that aren't powers of 2 for starters. ...
Beats me why this should be. Threads are just threads afterall. If it's not scaling well because it's not a power of 2 then something somewhere has been badly coded or designed, either the 3dsmax renderer itself, or the chip in some way, or the OS. Hard to tell I suppose.
> Like many benchmarks it's very difficult to keep 6 cores fully pegged. The x264 HD test ranges from 50 - 94% CPU utilization across all six cores on the Phenom II X6.
This confirms my conclusion then, the X6 doesn't offer anything significantly useful over the 860 in some cases, including DivX encoding. Where it's faster than the 860, the difference is too small to be useful given the higher power consumption, and in some cases even on multithreaded loads it's actually slower. I started reading the article fully expecting the 860 to be trounced for the DivX test; that it was actually slower was a bit of a shock.
> I haven't looked at vray as a benchmark, I'll do some digging :) ...
The systems I've been speccing out include the SGI Altix XE 340, BOXX 10400, SGI Altix UV 10, Dell R810, Dell R910, SGI Origin400, SGI Octane III, etc. ie. all dual or quad socket i7 XEON, 48GB RAM, that sort of thing.
> And I agree completely on the issue of BIOS support. You should see how these things work behind the scenes, motherboard manufacturers are often still fixing BIOS issues on brand new boards even days before a launch like this.
I dread to think...
I don't know how the vendors are allowed to get away with it. Such policies force people to ditch their older motherboards even when in theory said board could use a modern CPU. My ASUS board *should* be able to use the X6, but instead it's stuck at the useless Phenom1. Asrock's approach couldn't be more different; that they would bother to add Ph2 X4 support to a board as cheap, old and quirky as the AM2NF3-VSTA is remarkable (socket AM2 with AGP), though I suspect they won't add X6 support for this board.
Please take a look at some of the other reviews around the web (like hexus, tomshardware) and ask yourself if you are using benchmarks that massively favour intel.
I don't believe our full results are out of whack with what others have posted. Heavily threaded apps favor the Phenom II X6 (3D rendering, video encoding, 7z compression benchmark), lightly threaded or mixed workloads favor Lynnfield (gaming, everything else). It's more a question of what sort of balance do you strike between these applications. That's why I culled down the results for the actual review so we have a more balanced representation of all of the different potential scenarios.
What I noticed is that a lot of sites didn't include the i7 860, but only the considerably more expensive, but barely faster 870. I'm glad you used the 860, as it is pretty much in the 1090T's pricerange. And as your tests show, most of the time the 860 comes out on top... while also having lower power consumption.
I am an IT student and use a lot of virtualization in labs - Virtual PC, hypervisors and so forth. What I would really like to see is a virtualization comparison and benchmark. We are moving slowly in that direction anyway.
To folks asking for a more detailed overclocking review, I would just say that Anand almost always releases an in-depth OC article on a new CPU architecture anywhere from a day to a week later. I think he usually wants to get the basic info out first, then delve into the nitty gritty for those who OC.
I think Thuban could've been a little better realised;
1) Higher uncore speed - whatever happened to the touted 5.2GT/s HT3 link?
2) Triple-channel memory controller - AMD have been using dual-channel controllers for the best part of a decade; this HAS to be starving Thuban.
3) Keeping the Phenom II's core control system. Phenom I may have been more elegant, but even if Thuban is faster at ramping up the voltages, it'll still result in issues with XP and Vista. So the targeted audience, at least for Microsoft users, would be Windows 7.
Which reminds me... SysMark is on Vista, an OS known to cause issues with Phenoms. Would this have detrimentally affected the X6's scores, even if two cores are being taxed?
I don't think extra cache would be viable for AMD. The Athlon II X4s aren't far behind equivalent clocked Phenom II X4s even without any L3 cache, plus the added expense and die complexity would've just pushed prices, and temperatures, upwards. Of course, a higher model with 8MB of L3 cache would be nice to see.
It's not really a disappointment to see Thuban fail to topple the entry-level Nehalems. Remember that they're logical 8-thread CPUs and are thus more efficient at keeping their pipelines fed. You can still get a high-end AMD setup for cheaper than the competing Intel setups; just throw some heavily threaded software at it and it'll do very nicely. The new X4s may just give Intel cause to drop prices though.
One final thing - AMD's offerings are known to perform far closer to Intel CPUs when every single bit of eye candy is enabled in games, including AA, and pushing the resolution upwards. It may have been more telling had this been done.
1) Higher uncore speed means higher power consumption and probably less power efficiency.
2) You would need a new platform that makes the current one obsolete. You would also need much more time and money to validate.
3) I actually see no problem.
Sorry, but your claims are unrealistic or pointless.
"1) Higher uncore speed means higher power consumption and probably less power efficiency." You could just reduce the clock speeds to compensate, assuming a higher uncore yields a satisfactory performance increase. The i7-920 has an uncore speed of 2.13GHz and Phenom IIs at 2GHz.
"2) You would need a new platform that makes the current one obsolete. You would also need much more time and money to validate." Fair dos.
"3) I actually see no problem." The potential for a thread hitting an idle core would still be there as, even with Turbo CORE doing its thing, there would be the potential for three idle cores, however this will be minimised if AMD has decreased the delay needed for a core to ramp back up from 800MHz.
"Sry, but your claims are unrealistic or pointless." That's fine.
Nice read; well done Anand! Are you planning to do an OC follow-up like you've done in the past? Also, I noticed that in the second "CPU Specification Comparison" chart on the first page, "AMD Phenom II X4 965" is included twice. P.S. What's an IOMMU? Can someone explain please?
The short answer is that an IOMMU is a memory mapping unit (MMU) for I/O devices (video cards, network controllers, etc). For most readers of this site, the only time they'd use an IOMMU is when using a virtual machine, as an IOMMU allows the virtualized OS to talk more or less directly to the hardware by translating the virtual addresses to the physical addresses the hardware is using. However it does have other uses.
While in general the upper end i5/i7 CPUs are a bit faster, there are other costs involved.
Basic AMD boards are $80 and upper-end CrossFire boards are $100~150, while typical P55 boards are $100~200 and X58s are $200~300.
So the Intel setups run about $150~400 more to get a few seconds' worth of performance improvement.
Also, if someone buys a lower-end i3 CPU, they can't upgrade to a top-end i7 because of the different CPU sockets, while those who bought an AMD board a year ago will most likely have the option to upgrade.
Who would want to run a six-core CPU on a two-year old board though? You wouldn't have things like USB 3.0, SATA 6 gbps, probably not even PCI-express 2.0 either. I don't think it's a good idea to upgrade a CPU in the same board in general. Usually the old board will severely hamper performance, so you're not getting your money's worth for the new CPU. And you'll miss out on the new features. In all the years that I've been building PCs for myself and for friends/family, I have never found a CPU-upgrade very compelling, and I wouldn't recommend it to anyone.
I wouldn't be surprised if it's better to get a 1055T instead of a 1090T, and spend the extra money on a new board, rather than running the 1090T on the old board. There's a very interesting article in there for Anand I suppose.
All power saving features were enabled on all chips - C1E and CnQ were enabled on the X6. You need to enable these features otherwise Turbo Core doesn't work.
I meant C1E obviously, and what about overclocking? Just a half-hearted attempt? Makes me wonder if there is some fanb... no, I'm not going to say that word. I've always seen you guys as very professional and can't quite believe that. I'm hoping you post the 4.2GHz results very soon. :)
Considering the intended uses for this type of processor - i.e. heavily threaded applications - it offers more performance for the price as compared to anything Intel is throwing out.
Yes, Intel's products are faster per clock, but they just can't match AMD for performance at a given price point. Guru3D touched upon the point that you could build a 1055T machine for $600; that's $400 less than the 980 on its own. How much would it cost to build a 980 setup, or even a 930 solution in comparison?
Even with the 930 being $200 at Microcenter (I've got one nearby, so yes, I know that), X58 motherboards are still more expensive overall. The cheapest is $160+, and they go up to $400. The thing with X58 is that just finding a decent board that doesn't have a bunch of weird problems voiced by owners on forums/Newegg is very, very difficult. One has to either spend around $300 to perhaps have a higher chance of getting a relatively problem-free board, or just pray the board that arrives doesn't have a DOA memory slot (which seems to plague quite a few mobos).
I don't think the situation is anywhere nearly as bad on AM3 side.
Too bad it's not like the old days (5+ years ago) when boards would just WORK without having to eff around to even keep it stable at stock speed.
If you are building a new system, of course you will want to use DDR3 memory for there is little to no price difference between the two.
But some people have older systems. I wouldn't be surprised if people are upgrading from their AMD 5000+ or 6000+ and already have a compatible motherboard and memory. I am just wondering how much of a bottleneck the DDR2 is, and whether it's a rationale for skipping a partial upgrade and doing a full upgrade instead.
It doesn't have to be all the tests, just a couple.
That my Q9450 clocked to QX9770-spec is still very competitive.
I think it will be awhile before I change platforms, especially what with Intel planning socket changes for 2011. By then, we should see some more interesting options and maybe DDR3 will drop in price --I can always dream.
When testing your 1366 i7 CPUs you need to use 6gb of ram. Why? Because they can utilize that extra bandwidth for increased performance. You stated that early on, but since 1156 came out your tests have been only with 4gb of ram. Drop 6gb of ram in the 1366 and retest. I'm sure that is what most 1366 owners are using in their systems.
In order to keep memory size constant we use 4GB on all systems (4x1GB on the LGA-1366 boards). You get full triple channel access for the first 3GB of memory, only the final 1GB is limited to dual channel. It's the only compromise we could come up with short of giving LGA-1366 a memory size advantage.
The vast majority of our tests fall in the 2 - 3GB range, so I don't believe we're holding back the Nehalem/Gulftown performance much if at all here.
Applications that will use that much memory and memory bandwidth? Off the top of my head, I can only think of moderate-to-heavy database use. Too many people overlook the 1156 platform in favour of 1366 because of triple-channel memory when in reality, even heavy gamers are just never going to fully utilize that much memory or bandwidth.
I run a 4GB system, running games and oodles of apps at the same time, and I've never seen my memory use go much past 3GB.
So it really comes down to whether your use for a system like the i7 9xx & X58 would really need the triple-channel bandwidth and extra memory.
Uncompressed HD editing easily uses more than 4GB RAM. Anyone using an X58 with a Quadro card for professional work should definitely have 6GB minimum.
I was hoping to see some benchmarks in Battlefield Bad Company 2. I thought Anandtech had added it to the gaming tests. We know the game scales well using a quad core vs a dual. I was curious to see the difference between 4 vs 6 cores.
I'd be curious to see how this stands up in the VM tests you did earlier this year. At face value, it seems VMs = more threads and this proc would be of value.
Thanks for including the old P4EE, the E6850, and even the QX9770. I think a Q6600 or a Q9550 makes more sense than a QX9770, as there will be many, many more readers looking to upgrade from those processors, but at least I can extrapolate from there. They all face the issue of having to replace DDR2 with DDR3.
Which gets me to the point of my comment, that little to nothing is made of the situation of a user with an older AM2+ or early AM3 motherboard that is deciding whether to upgrade the CPU (easy drop in), M/B+RAM+CPU, or pass. Again, many, many more AMD using readers in that position than QX9770 using readers. Both AM2+ users and Core 2 users have the same dilemma, when they upgrade, they have to replace DDR2 with DDR3 as well. That makes upgrades not only expensive, but exercises in future-proofing, since these big expenditures need to be justified by long life and upgradability.
In that vein, I would have liked to have seen an AM2+ motherboard in the reviews to verify compatibility and identify any performance differences. Previous reviews identified little difference between DDR2 versus 3 but now we have 890 and hex core to consider in the mix.
Many of us upgrade step-wise, that is, get an AM3 M/B and DDR3 today and use our old formerly-AM2+ CPU for a while, and then look to upgrade to a hex-core on AM3. Or maybe the other way around. We look to articles like this for guidance, but there seems to be an assumption that everyone reading is trying to decide whether to upgrade from a QX9770 to a 1090T, which is not only unlikely, but silly and pointless.
The few QX9770 users are going to fall into two camps. Those who simply buy the next Intel EE, and those who will never, never, ever blow their money on that kind of price/performance again.
In short, I think Anandtech should start considering more the common case of a user looking to jump generations, as opposed to silly non-questions like comparing 790 to 890 chipsets. If I bought it last year, unless some huge performance jump has been made, its going to stay in service for another two. For those users, the questions are, what CPU is it going to be running at that time? Should I buy that CPU now or in the Autumn?
The most keenly interested users are the ones looking to either upgrade their old AM2+ M/B "for another year", or those stuck with last generation hardware who are trying to figure out where and when to go from there. The latter are unlikely to be interested in these hex core CPU's, but they need to read the article to find out.
HangFire writes: > Which gets me to the point of my comment, that little to nothing is made of the situation of a user with an older AM2+ or early AM3 motherboard that is deciding whether to upgrade the CPU (easy drop in), M/B+RAM+CPU, or pass. ...
Some good points there. Indeed, users with older mbds that could theoretically use these newer CPUs will find it hard to locate useful info on whether an upgrade is worthwhile or if their older base hardware is holding back performance.
This is made more complicated by two issues:
a) There are AM3 motherboards that use DDR2 RAM.
b) Lots of AM2 boards have not received BIOS updates to support even the quad-core version of the Phenom2, never mind the new 6-core chips.
Way back when the Ph2 came out, I had hoped to write a review of the chip by upgrading my existing AM2 6000+ 3.25GHz system (4GB DDR2/800 RAM). It would have been an interesting comparison, but alas ASUS hasn't bothered to release a BIOS update to support anything newer than the useless Phenom1, so I'm stuck. In the end I just bought a new mbd from a vendor which seems to care much more about BIOS updates (Asrock) and an i7 860.
> have the same dilemma, when they upgrade, they have to replace DDR2 with DDR3 as well. That makes upgrades not only expensive, but ...
The other alternative is to buy one of the mbds that have an AM3 socket but still use DDR2 RAM, allowing one to retain one's existing RAM kits. For example, the Asus M4A77D is a reasonably priced board, Socket AM3, supports Ph2 X4, uses DDR2 RAM:
But again the conundrum: will ASUS update the BIOS to support the Ph2 X6? Anyone's guess. My board cost 100% more and they didn't bother even adding support for the Ph2 X4 (btw, I did ask ASUS about this; their response was, it's an old board, who cares). By contrast, Asrock has already added X6 support to many of its older/cheaper boards. I asked Asrock for upgrade advice and they were very helpful.
> differences. Previous reviews identified little difference between DDR2 versus 3 but now we have 890 and hex core to consider in the mix.
Good point, a X6 in a DDR2 setup vs. a DDR3 setup where the RAM and all cores are being hammered (eg. video conversion, animation rendering, etc.) would certainly be interesting.
> Many of us upgrade step-wise, that is, get an AM3 M/B and DDR3 today and use our old formerly-AM2+ CPU for a while, and then look to upgrade ...
I agree, I did something similar. Originally had an AM2 6000+ AGP setup, switched to a PCIe board so I could keep the CPU and RAM, replaced the gfx (X1950Pro AGP) with an 8800GT. Naturally I'd been hoping to switch the CPU to a Ph2 X4 later, but no such luck.
Anyone know what Gigabyte is like with their BIOS updates? I *almost* bought a Gigabyte board for my new build, but went with Asrock after deciding the latter's slot spacing was better suited for my needs (I plan on fitting a PCIe RAID card, among other things).
Btw, Gigabyte also has an AM3 board which uses DDR2 RAM, the MA770-UD3:
"Anyone know what Gigabyte is like with their BIOS updates? I *almost* bought a Gigabyte board for my new build, but went with Asrock after deciding the latter's slot spacing was better suited for my needs (I plan on fitting a PCIe RAID card, among other things)."
I have been using Gigabyte for the past 6 years and found them awesome in their ability to upgrade both enthusiast and non-enthusiast boards' BIOS for the latest processors.
ASRock is in a class by itself in supporting the step-wise upgrader, but (until recently) usually performance took a back seat.
ASRock also only offers a 1-year warranty, and they don't even handle it themselves; they ask you to go back to whatever brick & mortar shop or online store you bought from (they call all of this the "authorized retailer/distributor") and have them handle getting you a replacement, which most will only do within 30-90 days of purchase.
Unless you really, really want to save money, or want the mix-and-match options that many ASRock boards give you, I would never recommend an ASRock board over an ASUS or Gigabyte one.
"Price point" does not make you sound cool, it just adds an extra word for no reason. Just say price, man. We understand how it varies. Toss it atop the garbage heap of unecessary buzz terms like "solutions" that make computer industry talking heads look like try-hards.
Could you post the exact model numbers of the Corsair memory kits you used? It'd be much appreciated.
Great review, very informative. Intel still holds the performance crown, but it's nice to see AMD staying right on Intel's tail, keeping the pressure on for more innovation from both camps.
I see no attempt to run the memory and the NB a bit quicker. Let's not forget that the Black Edition is made to run comfortably with DDR3 @ 1600MHz and the NB @ 2400MHz. It gives a significant boost to all operations.
My system: AMD Phenom II X4 955 BE DDR3@1600 NB@2400 | MSI 790FX-GD70 | 4 x 2GB = 8GB OCZ Platinum DDR3 | Intel X25-M G2 | Asus Radeon HD 4770 | BenQ FP241VW 24" LCD Monitor | Antec Neo HE 550 | Cooler Master Hyper 212 Plus | WinXP64 SP2
So the AMD 6-core is not as good as Intel's line when it comes to one benchmark at a time. But what about running multiple instances of programs and assigning them certain cores, or just letting them run freely? For example, I would like to run 6 instances of DVD Shrink to rip and burn DVDs to ISOs, or run 6 instances of Handbrake and encode 6 different video files at the same time rather than batching them. Is that possible? Oh, and VirtualBox or VMware with 3 OSes on two cores each, testing their performance. Can someone do that please, or send me a CPU/mobo and I will test it out. Thanks.
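For what it's worth, you can already script that kind of per-instance core pinning yourself; here's a minimal sketch, assuming the psutil package is installed and using HandBrakeCLI with placeholder file names (not a tested recipe):

import subprocess
import psutil

# Launch three encode jobs and pin each one to its own pair of cores.
jobs = [
    (["HandBrakeCLI", "-i", "in1.mkv", "-o", "out1.mp4"], [0, 1]),
    (["HandBrakeCLI", "-i", "in2.mkv", "-o", "out2.mp4"], [2, 3]),
    (["HandBrakeCLI", "-i", "in3.mkv", "-o", "out3.mp4"], [4, 5]),
]

procs = []
for cmd, cores in jobs:
    p = subprocess.Popen(cmd)                    # start the encoder
    psutil.Process(p.pid).cpu_affinity(cores)    # restrict it to the chosen cores
    procs.append(p)

for p in procs:
    p.wait()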
I agree. If it's a struggle to utilise all six cores at 100%, just add another program to the mix. This may just prove once and for all if a physical Stars core can beat a logical i-core, and thus whether AMD were right to launch Thuban in the first place.
I'll say a few things to that... A physical Stars core actually has to beat TWO logical i-cores. After all, we have 6 Stars cores vs 8 logical i-cores. So if we were to say that the 4 physical cores on both are equal (which they're not, because the i-cores have an advantage), that leaves 2 physical cores against 4 logical cores.
Another thing is that if you have to work hard to set up a multitasking benchmark that shows Thuban in a favourable light, doesn't that already prove the opposite of what you are trying to achieve?
I mean, how realistic is it to set up VirtualBox/VMware benchmarks for a consumer processor? Doesn't that belong in the server reviews (where, as I recall, AMD's 6-cores couldn't beat Intel's 8 logical cores in virtualization benchmarks either)? Virtualization is not something that a consumer processor needs to be particularly good at, I would say. Gaming, video processing, photo editing - now those are things that consumers/end-users will be doing.
@mapesdhs There's no such thing as an AM3 board with DDR2, only an AM2+ board with DDR2 that has AM3 support. The MA770-UD3 you gave as an example is an AM2+ board with AM3 compatibility: "Support for Socket AM3 / AM2+ / AM2 processors". AM3 boards do not have support for AM2+ and AM2 processors.
Could someone please tell me the difference between the Phenom II X6 1090T & 1055T?
I would like to put one of these new chips into my Gigabyte DDR2 MB but the Gigabyte web site says my board only supports the 1035T and the 1055T chips. My board is rated @ 140 W. ( GA-MA770-UD3 )
I am currently running an Athlon 64 X2 6400+ (3.4GHz) and I do not want to lose too much clock speed by going with the 1055T (2.8GHz).
I'm waiting for them to cough up a new arch that delivers MUCH better per-core performance.
There is just no value proposition with their 6-core CPU that mostly matches a quad-core i7 920, which can be had for a roughly similar price, i.e. the i7 930 at $199 @ MicroCenter.
Either way, unless I win the giveaway :D, I'm now planning to wait at least until next year to upgrade the desktop, to see how Sandy Bridge comes out and IF AMD manages to get out their new CPU. I figure that I may as well wait now for the next sockets - LGA2011 for Intel, and what I'm sure will be a new one for AMD with their new CPU. As an added bonus I'll be skipping the 1st generation of DX11 hardware, as new architectures supporting new APIs (DX11/OGL4) tend not to be the best optimized or most robust, especially, apparently, in nVidia's case this time. (Although AMD had an easier time of it as they made few changes from R7XX to R8XX, as is usual for them. AMD need to really start spending some cash on R&D if they wish to remain relevant.)
The true point of the X6 is heavy multi-tasking. I'd love to see a real stress test thrown at these to show what they can do, and thus validate their existence.
I would really like to have seen a World of Warcraft test with these CPUs like you did with the Intel 6-core. It would be interesting to see whether WoW can use all the cores and how it performs.
Not sure why there is no Power vs. Performance vs. Price comparison of the different processors. As for the performance, it could be anything that you want, such as Gaming Performance or Video Encoding.
Such a comparison would be interesting, since you might recoup the higher initial price through power savings.
There appears to be a disparity... In the forums, the guys who have the 1055Ts are getting 4.1GHz on 1.42v, and are doing a lot of very stable benching. It appears to be more of the rule than the exception...could you have gotten either a bad board or a bad chip? http://forums.anandtech.com/showthread.php?t=20698...
Has anyone here bought a 1090T? How did you find it? Particularly interested in those using their systems for video encoding and/or animation rendering.
I have been jumping back and forth on this issue while trying to decide if I should go with AMD or Intel for a gaming machine. Until the point where I read this article I was almost completely swayed to the side of Intel however when reading the specs of gaming performance I was somewhat surprised by just how close AMDs chip actually came on most points. I understand that the 1090T got the tar kicked out of it a lot during this comparison. However, I have to consider the fact that upgrading to a Quad Core now will almost certainly result in me having to change motherboards down the road when ( not if ) Intel decides that they no longer want to support their chipset. It makes the idea of buying a second gen chip seem like a bad choice even if it has a slightly higher performance.
The only comparison that made me cringe was Dawn of War II and possibly Dragon Age: Origins (but only because the core i7 980x had an impressive 170fps). With the exception of Dawn of War II these framerates seem so high (and in most cases close together ) that I can't really imagine there being a noticeable difference. So my question would be this: If you're running two or more high end graphics cards in CrossfireX on the 1090T are you really going to notice any difference on a consistent basis compared to a Quad Core or are we just splitting hairs at this point?
I'm a chess player. I use deep rybka 4 SSE42/SSE4A based engine. And I find 1090T is faster than i7-930\920\870\860. And i7-965/975/980 is too expensive, so 1090T is my best choice here.
Price of a 1055T platform: £350
Price of an i7 860 platform: £550
Price of an i5 750 platform: £450
Also there is talk of the AM3 support for the new AMD processors (Bulldozer, 8 core 28nm).
Personally, I have no complaints about my 1055T. It runs very cool and quiet (Corsair H-50), and I get good performance; coupled with an HD5850 it copes with anything. It's a decent mid-high spec system.
The Intel / Nvidia board is also an excellent gaming platform, especially with the arrival of the new GTX 460, that can compete directly with the HD5850 at a lower price point (which will no doubt be reduced at some point).
I was thinking about upgrading my E6850 so I could play better games. Looking at your review, I clearly see the chip is not the bottleneck but the video cards are, so I can go with a new SLI configuration and really rock. Saved me a lot of money - thank you!
Get your i5 or i7 to 4.5GHz, hahahaha, and put your 3DMark score up; the AMD 1090T is the second best CPU, hands down.
I purchased an ASUS CG1330 with a Phenom II X6 1035T at 2.6GHz. Nowhere do they mention the turbo function. Is this something I can turn on, or is it an automatic feature? I bought this after having a Gateway 6840 which died after 10 days. When it requested that I insert the restore disk, I realized that the optical drive did not have a physical eject button, so I could not insert the restore disc. Can you say "catch 22"? I bought it at Costco, which does not offer on-site tech help, so I had to return it (the last one, of course) and go to Best Buy. Lessons learned: support, support!
Please check the site out; it is not an Intel fanboy site or an AMD fanboy site, but it has the info you need. This page was an old post, but the site has the info from then and now, and it looks like - wait for it - wait for it - Intel just plain kicks ass, and all us computer geeks know that.
I have disabled the middle 2 cores on my 1055T (240MHz FSB, auto voltage, CnQ, and Turbo enabled) and it works quite well at bringing down temps when gaming. When Turbo Core is active the voltage of all cores goes up to 1.475V, so disabling the cores saves power and temps.
This works well in the summer, during the winter all heaters are enabled.
Sir, I want to upgrade my PC with the AMD Phenom II 1090T and I'm new to AMD processors. Please suggest which motherboard I should choose, as I don't know much about motherboards. Please also tell me the price in India of the Phenom II 1090T and a motherboard. I will be very thankful to you... waiting for your reply!
Sounds a lot like the cycle magazines touting one bike having 1.5 HP more than the other, but really, how much riding is done nearly bouncing off the rev limiter?
Same thing with these chips. How often are you going to experience the slight difference between this top-of-the-line, at-its-limits Intel chip and the AMD at 1/3 the price?
I think the point of the article was that the huge cost savings of an AMD offsets the slight difference in performance.
BTW, all my computers have Intel only because that is what they had in them, and if in the future I needed to make a choice, I would need better info like this article offers.
I could have gone with the AMD 6-core for less than my Intel quad-core i7 cost!
Eeqmcsq - Tuesday, April 27, 2010 - link
Hey, found a typo on the AMD Turbo page: "While mainstream CPUs are down at 65nm". I think you meant 65W.
Anand Lal Shimpi - Tuesday, April 27, 2010 - link
Thank you! Fixed!
falc0ne - Thursday, April 29, 2010 - link
:O) lol
Rick83 - Tuesday, April 27, 2010 - link
there's also an "i7 670" in one of the charts on the first pages, which should clearly be an i5.msc_chandu - Tuesday, September 7, 2010 - link
nm refers to nano-meter and 65nm is a technology node. So, the processor manufacturing has been rolling down. For example, 250nm to 130nm to 90nm to 65nm to 55nm etc...It is not a typo and was mentioned correctly. :)
DaCentaur - Wednesday, September 15, 2010 - link
Chandu naanna, if you take a look at the context, i.e. the sentences before and after the "wrong" sentence, you should be able to understand why it should be "W" instead of "nm"."High end desktop CPUs now spend their days bumping up against 125 - 140W limits. While mainstream CPUs are down at 65W. Mobile CPUs are generally below 35W. These TDP limits become a problem as you scale up clock speed or core count."
Understood now? :D
creathir - Tuesday, April 27, 2010 - link
As usual Anand, the article was great and really helped with figuring out where I am taking my wife's next computer. I was tempted to go AMD this time around for her next PC, but after reading this I believe I will stick with an Intel solution. Thanks man!- Creathir
webmastir - Tuesday, April 27, 2010 - link
yup. agree...very informative article. thanks!
pow123 - Wednesday, May 5, 2010 - link
You would have to be insane to pay $1000 for a chip that may be good for gaming. at $199 with slightly lower performance its a no brainer. When I build a system, I don't care if the frame rates etc is 10 to 15% better. Who cares ; the chip is fast and I have not problems playing high end games. I have no special setup and it does everything that my friends I7 can do. Good for me I get more pc for the buck . Go ahead and go broke buying just a motherboard and cpu when I can get a modern motherboard a cpu, 6gigs of ddr3 1600, a 1tb hd and a dvdrw. More for me.
abd-jbr - Friday, June 18, 2010 - link
i compared intel and AMD from almost AMD start ( K6 II ) CPU , i believe that AMD did not change their Policy much : they still offer more for less , with one different , in that time the market was also different from today , now a days , you can easly found all the hardware you need and it's compatible and works with AMD ,it's accurate Say : it's not about absolute Perf , it's about pref at a given price
and in this , i think AMD wins .
Boogaloo - Tuesday, April 27, 2010 - link
That's pretty disappointing scaling from the 965 to the 1055T.. I'd be willing to bet the lack of added cache (not to mention the memory controller issues the article raises) are really holding back the performance. Unfortunately, 6 cores at 45nm hasn't left them with much of a choice as far as that goes.
SonicIce - Tuesday, April 27, 2010 - link
looks like the 7z compression rate went down with turbo core enabled? an error?
Ryan Smith - Tuesday, April 27, 2010 - link
Yeah, we accidentally flipped the chart. It has been fixed.
FragKrag - Tuesday, April 27, 2010 - link
A bit underwhelming since before I read this bench, I saw overclock3d benches (they had 1090T at stock absolutely thrashing an i7 at 4GHz). Seems like the ideas that most of the people had with threading were true though. Overall I think they should be decently competitive because they do hold the advantage in cores.It's nice to see AMD pushing Intel a bit, though this does not make me regret my earlier purchase of the i7 860 as much as I would have liked it to.
Is there some kind of error with TurboCore? It doesn't look like it's working because those Dragon Age (optimized for quads?) and Dawn of War FPSs were a bit disappointing! :(
beginner99 - Tuesday, April 27, 2010 - link
Well as always for the last couple of years. This can't go on forever. Just if you think about the margins that intel has and the one that AMD has (if any...).The Problem I see is that in the cheaper price regions (=mainstream) users do not need a ton of cores. In the end most could easly live with 1 core even though 2 would probably lead to a "feelable" difference.
So I'm not sure if it's very intelligent to invest money in these designs. Just look at the die shot. I'm not an expert but you don't need to be to see that the just attached 2 more cores to an existing design probably having to make a ton of trade-off's (like the l3 cache).
I mean it is theoretically great. The easy upgrade path but again, how many users actually ever upgrade? IMHO a tiny fraction.
Of course thise i3/i5 dual cores from intel are overpriced. But that's what most company PC's will use. Huge profits for intel.
I hope at least AMD's answer to atom will be very good (=alot faster, same power consumption). But I doubt it...
ET - Tuesday, April 27, 2010 - link
The entry level crowd isn't the problem. Someone who only requires basic performance should buy the cheapest CPU, and AMD has that slot. For the performance crowd, Intel is the clear winner, but it's not the majority of the market. The real issue here is the price performance crowd, the people who want good performance for a good price. This is a pretty large market and AMD is trying to win here by reducing the price, but it's not doing great by this review.I think AMD still makes money on all CPU's, just not nearly as much as Intel.
Scali - Tuesday, April 27, 2010 - link
"Someone who only requires basic performance should buy the cheapest CPU, and AMD has that slot."I think that's his point though?
This CPU is not the 'cheapest'. Why invest in a design like this?
As Anand says in the conclusion, for 'mixed' usage the i7 860 is a better choice. Only people who need heavily threaded performance may want this CPU. Especially at the price at which AMD is selling these six-cores, can they really get a good return on investment on the development of this CPU?
Seems like dualcores and quadcores is where the bulk of the sales will be.
jamyryals - Tuesday, April 27, 2010 - link
Your argument is short-sighted. You can't stay with 4 core models indefinitely and expect to remain competitive in the future. R&D money has to be spent on the new technology even if your niche is the Price/performance budget sector.
Scali - Tuesday, April 27, 2010 - link
I think it's short-sighted to just look at the number of cores.What AMD REALLY needs is a core architecture with a good IPC and relatively cheap to produce. That is what made the K7 and K8 so successful.
Currently AMD is just throwing more transistors at the problem, which is not a good thing, given their position (45 nm process vs Intel's 32 nm).
If anything Intel proves that it's not the number of cores, since Intel's quadcores generally outperform AMD's six-core. So as it stands today, Intel can stay with quadcores just fine.
beginner99 - Tuesday, April 27, 2010 - link
Exactly my thought. But I must admit I did not think about servers at all. If it was easy to adapt a server CPU that had to be made anyway, then it's a different story. (Is it half a Magny-Cours?)
But generally i would say a complete new architecture is needed soon to stay competitive.
Phenom architecture is quite old now and never was that good anyway especially compared to core architecture...
Scali - Tuesday, April 27, 2010 - link
I think the same argument goes for the server market...AMD cannot compete with Intel on performance, so they have to compete on price, meaning low profits.
AMD *could* compete with Intel in the server market, but that was before Nehalem. Now Intel has some very strong offerings in the server market...
So it's the same story either way... should AMD really try to build such large high-end CPUs, when they know they won't be able to compete on performance anyway, and performing on price is dangerous when your competitor is always a manufacturing node ahead.
AMD *could* have had the Atom market, but they actually killed off their Geode line of CPUs just before Atom was introduced and pretty much owned the netbook market.
realitycheck - Tuesday, April 27, 2010 - link
Everyone keeps mentioning the fact that AMD needs a new architecture, that they can't remain competitive with the current yet aging Phenom core, and that AMD is wasting time and money by just throwing more cores at the problem.
I think you guys are missing the bigger picture, the Phenom II X6 is meant solely as a stop gap, an effort to remain somewhat competitive and keep revenue flowing while they finish baking Bobcat and Bulldozer, due out next year. AMD knows that the Phenom II isn't a show stopper, or even a headlining act; that's why these CPUs were released without any major fanfare. They aren't trying to fool anyone into thinking these chips are something they're not, they're simply trying to hold down the fort.
I think that once complete, the Bobcat / Bulldozer line will be what puts AMD back on the competitive front. Just looking at the code names they chose: the last time AMD got bold and used suggestively strong code names (Tack Hammer / Claw Hammer / Sledge Hammer), the end products (Turion 64 / Athlon 64 / Opteron) lived up to the expectations, and in fact they proved to be game changers. I believe the same will hold true with Bobcat / Bulldozer.
Another point I'd like to make is that over the past 10 years AMD has matured quite a bit, growing up from being a completely non-threatening low-end sub-generic CPU builder, with seemingly no chance of ever amounting to much more than that in a tightly controlled Intel world. From those dark days AMD has grown to become a global player in the market and a competitive threat to Intel. They've come a long way from the AMD of the 90's. Most people think that the whole Barcelona / Phenom fiasco, with the TLB erratum and delayed launch cycles, has spelled the end of a competitive AMD. I think that whole fiasco is exactly what AMD needed; people learn the most from their failures, not from their victories. Just look at Intel: they had all but stagnated all through the 90's and the first half of the 2000's. It wasn't until AMD came along and kicked them right square in their out-of-shape and bloated ass and then outran them for a few years that they got back in shape, so to speak, and started making competitive and compelling products again. So only time will tell if AMD has what it takes to be a competitor, or is just a fluke...
As for this review, why did you use such old drivers for the AMD chipset and graphics card? I mean seriously, the chipset drivers you used were developed a couple of years before the 890FX chipset you were using was released. How about using 10.3 over 8.11 and 9.12? Or is there something special about the 8.11 and 9.12 drivers? Given the gap between your versions and the current ones, I'd say you're somewhat kneecapping the tests.
Scali - Wednesday, April 28, 2010 - link
"I think you guys are missing the bigger picture, the Phenom II x6 is meant solely as a stop gap, an effort to remain somewhat competitive and keep revenue flowing while they finish baking Bobcat and Bulldozer, due out next year. "I don't think we're missing the bigger picture, we're pointing out exactly the same:
The X6 is a stop-gap; what AMD *really* needs to become competitive again is Bobcat/Bulldozer. We're just saying that they need it NOW, rather than next year (and it has to be a success as well, not the fiasco that Phenom was... a new architecture is no guarantee of success, obviously).
AMD has been struggling ever since the Core architecture was introduced in 2006(!). They REALLY need a good, new architecture now; it is long overdue.
gruffi - Wednesday, April 28, 2010 - link
I think you are missing a lot. What the hell are you talking about? Intel is throwing more transistors at the problem, not AMD. Compare the cores: Intel needs >50% more transistors per core. They need 8 threads for Nehalem to get full performance, AMD only 6 for Thuban. Intel is just brute force. Already today AMD has a smarter and more effective design. And the really interesting designs (Bulldozer, Llano, Bobcat) are not even launched. The only problem for AMD is software related, not hardware. The widely used compilers under Windows, MSC and ICC, optimize much more for Intel. And your knowledge about servers also seems to be outdated. Compare Magny-Cours and Gulftown. The Opteron is more powerful and more power-efficient. And it is still 45 nm. You really want to claim AMD isn't executing well? Then you must be joking or an Intel flamer. The X6 and i7 are not for people that utilize one or two threads most of the time. Thuban is not meant to be a mainstream CPU; that's what Deneb, Propus and Regor are there for. Thuban is for highly threaded workloads. And then an X6 1090T ($295) can outperform an i7 860 ($284) and also an i7 870 ($562). If you ask me, the X6 1090T offers very good value for such scenarios. The X6 1055T ($199) offers even more value.
Scali - Thursday, April 29, 2010 - link
Intel doesn't "need" more transistors, they are just in the position that they can implement a lot more cache than AMD, without getting into latency problems or getting too much power consumption.This 6-core 125W TDP processor has problems keeping up with Core i7 8xx series, which are only 95W TDP.
So how is AMD more energy-efficient? Only if you compare it unfavourably.
magic box - Sunday, September 4, 2011 - link
If you want to pay for Intel's advertising, then you pay more. It's like people paying for HP or Dell: a rip-off,
when we all know the Asus or the MSI in the corner that nobody talks about is so much better.
Use your head.
The Apple iPad is an oversized iPod Touch that people love to buy only because of advertising, yet you can buy a Windows touch tablet at half the price.
Intel chips have had bad times too; they have sold bad chips with cores that don't work, and so has AMD. They are both the same.
But if you want to get sucked into Intel's rip-offs, then that's your own fault.
It will be funny when I have the AMD 8-core and lock 4 of the cores and then compare it to any Intel chip; by the time Intel makes anything better than that, people will not be paying $3000 for the same thing you can get at $400.
magic box - Sunday, September 4, 2011 - link
You talk about the Intel quad-core being better than the AMD 6-core; you really need to look at locking and unlocking. Lock 2 of the 6 AMD cores and it will crap all over the Intel.
strikeback03 - Tuesday, April 27, 2010 - link
I would guess the hex-core is a lot more useful in servers, and there is a consumer version of the chip since the bulk of the development is expected to be funded from server sales.
JGabriel - Tuesday, April 27, 2010 - link
It might be better to think about it this way: most office and browser applications run quite fast enough, even on slower chips. A 67% performance difference will frequently amount to only a few microseconds, or less, in execution time.
As for gaming, casual and moderate gamers will be happy as long as they hit 30 frames per second at 1080p at moderate detail, which mid-range dual cores can handle in most games.
So the big differentiator is multi-media: very intensive applications where speed or extra cores, depending on the app, can significantly reduce runtime. For instance, the hypothetical 67% performance improvement mentioned above can change a 90 minute encoding job into a 54 minute job. Or result in lower times while using Photoshop. Or let you run a scheduled anti-virus scan in the background without impacting your gaming or video performance.
For many people, possibly most, more cores (or more threads) is more useful than clock speed, because they'll only *notice* the difference when running processor-intensive, heavily-threaded apps, or when heavily multi-tasking. And that is becoming increasingly more common as multi-media becomes a primary home use for PC's
If that's not a factor in their typical usage, then users are probably better off with an i3, or - and here comes the extra core advantage again - an Athlon II X3, which performs similarly to the i3, but for $40-$50 less.
As for profit margins, AMD just posted one of its best quarters ever, so the "more cores for the dollar" strategy, which provides more processing power where users are most likely to actually notice it, seems to be working to some extent. No, AMD won't overtake Intel, but it can be a profitable mainstream niche - and Intel is not likely to challenge them too seriously, for the time being, given its current problems with anti-trust in the US and Europe.
.
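As a quick sanity check on the encoding arithmetic above, here is a minimal sketch (assuming the hypothetical 67% speedup and 90-minute job quoted earlier; the function name is just for illustration):

```python
# Convert a quoted percentage speedup into the new runtime.
# Assumes the hypothetical 67% improvement and 90-minute encode used above.
def runtime_after_speedup(old_minutes: float, speedup_pct: float) -> float:
    """A 67% speedup means the job runs at 1.67x the old rate."""
    return old_minutes / (1.0 + speedup_pct / 100.0)

if __name__ == "__main__":
    old = 90.0
    new = runtime_after_speedup(old, 67.0)
    print(f"{old:.0f} min -> {new:.1f} min (about {old - new:.0f} min saved)")
    # Prints roughly: 90 min -> 53.9 min (about 36 min saved)
```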
strikeback03 - Tuesday, April 27, 2010 - link
Do AMD's numbers for the quarter include graphics sales? As that should have been very profitable for them, being mostly the only game in town and able to sell above MSRP.
JGabriel - Tuesday, April 27, 2010 - link
Good point - and yes, it does. But revenue from the microprocessor unit also increased by 23%, according to Businessweek ( http://www.businessweek.com/idg/2010-04-15/amd-swi... )..
kenupcmac - Wednesday, December 1, 2010 - link
So now the AMD X6 is better for 3dsmax compared to the Intel i7?
Drazick - Tuesday, April 27, 2010 - link
Could you add some Matlab benchmarks?
Moreover, do you think most of the performance advantage of Intel processors comes from highly optimized code (towards Intel cores)?
It's something that should be investigated.
gruffi - Wednesday, April 28, 2010 - link
Yes, absolutely. I wish more GCC builds would be benchmarked. Intel's architecture is not as good as many people believe. Most of Intel's advantages come from better software support.
pjconoso - Tuesday, April 27, 2010 - link
Intel 6-core early adopters will feel like they were ripped off. I have had the Asus USB 3.0 (for review) for a few weeks now and I'm waiting for this processor to test it with - thanks for the review, I'm sure it'll help me a lot.
Scali - Tuesday, April 27, 2010 - link
Ripped off? Why?The performance of this six-core is nowhere near the Intel 980X.
This six-core can barely keep up with Intel's faster quadcores.
pjconoso - Tuesday, April 27, 2010 - link
Well, in my opinion the difference in performance versus the price doesn't justify it. They look nice in a bar graph and all, like the video encoding performance, but in reality it's just a few seconds.
It's all "just a few seconds". I'm going to wait for the Intel's consumer-priced hex-cores before I do anything. Right now, AMD needs 50% more cores to even match Intel's parts in heavily threaded code. Running out right now and buying all new kit might be leaving you feeling like "cores on the ground" if Intel comes out with the consumer-priced stuff. ;)Scali - Tuesday, April 27, 2010 - link
Price/performance has always been on an exponential scale.AMD was no different when their Athlon FX were the fastest CPUs around.
Intel doesn't call them Extreme Edition for nothing.
I just get tired of people who go around on the internet telling everyone that Intel only has $1000 CPUs, and therefore Intel is overpriced.
The fastest CPU on the market has been about $1000 for decades, regardless of whether it was from AMD or Intel. That just seems to be how the market works.
pjconoso - Tuesday, April 27, 2010 - link
"I just get tired of people who go around on the internet telling everyone that Intel only has $1000 CPUs, and therefore Intel is overpriced."
Sorry man, but this isn't what I'm implying, which is why there is an "in my opinion" in my explanation. It was really just a personal opinion, nothing else.
Scali - Wednesday, April 28, 2010 - link
I think people buying an Extreme Edition CPU know exactly what they're getting themselves into.Those CPUs are never good price/performance, you pay a premium to get the absolute fastest CPU on the market, that's the whole point of the Extreme Edition concept.
Obviously Intel isn't going to offer only one expensive 32nm six-core forever. Perhaps this X6 CPU will trigger Intel to release more 'mainstream' six-cores and other 32nm CPUs.
JGabriel - Tuesday, April 27, 2010 - link
For home use, yes. For professional use, that 48.8 frames/second on the 980x vs. 28.5 on the 1090T, for x264 2nd pass encoding, looks quite justifiable. If that's your business, that'll pay for itself in a couple of weeks..
pjconoso - Tuesday, April 27, 2010 - link
Point taken. I was speaking for home users. ;)
pow123 - Wednesday, May 5, 2010 - link
Exactly. A few seconds. I will not pay for an overpriced processor for a few seconds. Keep it coming AMD.
Lolimaster - Tuesday, April 27, 2010 - link
Watch Tom's review:
http://www.tomshardware.com/reviews/amd-phenom-ii-...
It does more justice to the AMD CPUs. Just pass over the synthetic, Intel-compiler-biased benchmarks.
haplo602 - Tuesday, April 27, 2010 - link
Actually the results are what I expected and what was also explained in the review. Not stellar, but very good for the money spent... I guess I'll buy one of the 4-cores.
Hacp - Tuesday, April 27, 2010 - link
Hacp - Tuesday, April 27, 2010 - link
Anand, why do you not try to push the chip on overclocking? Also, why not do an i7 overclocked vs Phenom X6 overclocked performance comparison? Overall, I feel that this review was pretty limited and unenthusiastic for such an exciting product.
ymetushe - Tuesday, April 27, 2010 - link
Second this. I was really hoping you'd do some overclocked benchmarks, say at 4GHz, so that we could see clock-for-clock performance of 6 Thuban cores vs. 8 Bloomfield/Lynnfield threads.
JGabriel - Tuesday, April 27, 2010 - link
Check the x264 and Cinebench results. Clock for clock, at 2.8GHz, two hyper-threaded Lynnfield cores seem to match three Thuban cores - at least for rendering & encoding purposes.
Anand Lal Shimpi - Tuesday, April 27, 2010 - link
I try to provide a look at what sort of headroom you can get out of the chip while feeding it as little voltage as possible. The idea is to keep power consumption at a minimum while increasing performance. I found that the jump from 3.8 to 3.9GHz required quite a bit of additional voltage, while just going to 3.8GHz was basically a non-issue - which to me is more impressive than trying to squeeze another 100 - 200MHz out of the chip.
Take care,
Anand
JGabriel - Tuesday, April 27, 2010 - link
I agree with your approach, Anand. As a customer, and for a general review, I'm most interested in what performance I can get without increasing the core voltage and power consumption.Pushing the overclock seems more suited for a separate article, or for sites specializing in overclocking or gaming.
.
GullLars - Tuesday, April 27, 2010 - link
+1 on more overclocking testing. It could be a new article dedicated to investigating OC vs stock against the i5 CPUs. Also, I remember seeing some results for power consumption per performance at different clocks and voltages; that would be interesting to see for the PII X6.
I would also like to see what effect just bumping the FSB up from 200 makes, and how high you can get it with stable turbo modes. Finding the max stable FSB with a lowered multiplier gives a hint at how far it can clock, and how far the non-BE versions can clock.
I'm considering a 1090T BE or 1055T to replace my 9850BE (too hot, and won't OC) until Bulldozer chips come out. If the 1055T can take a 20-25% FSB increase and stay stable, that would be my choice.
Anand Lal Shimpi - Tuesday, April 27, 2010 - link
Ask and you shall receive: http://anandtech.com/show/3676/phenom-ii-x6-4ghz-a...
jav6454 - Tuesday, April 27, 2010 - link
I miss when Intel was on the run for the market... AMD's 6-core does well, but it's only marginally better than a 4-core from Intel... true, you get more cores, but is the small performance gain worth it? I think not.
KaarlisK - Tuesday, April 27, 2010 - link
Overclocking it is definitely interesting, and it would be nice to know whether its power consumption is much lower or not. It's a lower bin, so it should have slightly higher power consumption at the same clock, but it does have a much lower clock.
Anand Lal Shimpi - Tuesday, April 27, 2010 - link
We didn't have an actual Phenom II 1055T, we simply underclocked our 1090T and lowered the turbo core ratios to simulate one. This is why we don't have power consumption results for it either.
Take care,
Anand
adrien - Tuesday, April 27, 2010 - link
Hi,
I find the results of the 7zip compression benchmark a bit weird. I own a Phenom II X4 955 and I'm currently compressing something (800MB) in a Windows 7 virtual machine with only two cores (way to go MS!). Still, I'm getting something more than 2MB/s (and this virtual machine has crappy disk I/O, maybe less than 4MB/s sustained). LZMA is slower on data that compresses badly so it's not possible to draw conclusions, but overall it still looks weird.
Which version of 7zip is being used? 4 or 9? And which minor version?
Also, are you using LZMA or LZMA2? IIRC, LZMA can only use 2 threads while LZMA2 can use more (maybe powers of two, not sure).
Also, I think that the 64bit versions are faster. I guess you're using 64bit binaries but it's better to check (running 32bit software on 64bit OSes indeed has a cost), maybe like 5% to 15%.
Anand Lal Shimpi - Tuesday, April 27, 2010 - link
I'm using 7-zip 9.10 beta and LZMA compression in order to compare to previous results, which unfortunately limits us to two threads. In the future we'll start transitioning to LZMA2 to take advantage of the 4+ core CPUs on the market. Today I offer both the benchmark results (max threads) and the compression test (2 threads) to give users an idea of the spectrum of performance.
Take care,
Anand
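For anyone scripting their own runs, here is a minimal sketch of driving 7-Zip with LZMA2 so compression can spread past two threads (assuming the 7z binary is installed and on the PATH; the archive and folder names are placeholders):

```python
# Drive 7-Zip with LZMA2 so compression can use more than two threads.
# Assumes the '7z' binary is on PATH; paths below are placeholders.
import subprocess

def compress_lzma2(archive: str, source: str, threads: int = 6) -> None:
    subprocess.run(
        [
            "7z", "a",          # add to archive
            "-t7z",             # 7z container format
            "-m0=lzma2",        # LZMA2 instead of LZMA (scales past 2 threads)
            f"-mmt={threads}",  # number of compression threads
            archive,
            source,
        ],
        check=True,
    )

if __name__ == "__main__":
    compress_lzma2("out.7z", "input_dir")
```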
adrien - Wednesday, April 28, 2010 - link
OK, very good. Thanks. :-)
That explains why the deca (and actually quad) cores don't benefit much from that. It'd probably deserve a mention on the chart or next to it.
adrien - Wednesday, April 28, 2010 - link
Not decacores but hexacores of course. Too early in the morning I guess.
Lolimaster - Tuesday, April 27, 2010 - link
Other reviews that are worth seeing:
www.bit-tech.net/hardware/cpus/2010/04/27/amd-phenom-ii-x6-1090t-black-edition/4
www.hexus.net/content/item.php?item=24332&page=8
Seems that only Anand rates the Thubans as so-so CPUs.
Calin - Tuesday, April 27, 2010 - link
From the Bit-tech review:
Conclusion
Despite being an astonishing £600 cheaper than the exorbitantly-priced Intel Core i7-980X Extreme Edition, the X6 1090T BE still isn’t a very good buy
sciwizam - Tuesday, April 27, 2010 - link
TigerDirect seems to have a $50 rebate on the 1055T.
http://www.tigerdirect.com/applications/SearchTool...
If Bing Cashback is applicable, there's another 12% off.
sciwizam - Tuesday, April 27, 2010 - link
Correction: Bing Cashback site says 8-10% for TigerDirect.
Roland00 - Tuesday, April 27, 2010 - link
There are two ways you can get cashback with TigerDirect:
1) If you look under cashback stores you get a lower cashback.
2) If you use the search bar and type in a key term you will get a different cashback; with TD the key term is "Tigerdirect", and with this trick you will get 12.3% cashback.
Roland00 - Tuesday, April 27, 2010 - link
TigerDirect now has a $50 mail-in rebate on the 1090T BE, making the total $249 after rebate.
http://www.tigerdirect.com/applications/SearchTool...
If you do bing cashback and actually search for tigerdirect you get 12.3% bing cash back.
$299.99-$36.89 (12.3% Bing Cash Back)-$50.00 (Mail in Rebate)=$213.10 after rebates and bing cash back
max347 - Tuesday, April 27, 2010 - link
Usually AT is my go-to for hardware reviews, but I have to say the overclocking section doesn't even look like any effort was put into it. It's a BE part, and you don't review how well it tweaks? Other reviews on the net have this at 4GHz+, and do all the charts with the OC and non-OC included.
I think most people who buy the BE part will not keep the stock cooler. I use a TRUE Cu, though I realize mainstream might be something a little less. At least throw a Zalman 9xxx on there and see what it can do with that. Benched at 4GHz, the 1090T is competitive right up to the 980X (stock), which I think gives people a little more info on how high they can "reach" with this CPU.
I am in no way an AMD fanboy, but the $300 price tag for this performance seems like a leap for AMD. It has always been in my mind price/performance rather than work/clock cycle or the like.
Anyway, Thanks for the review!
pjconoso - Tuesday, April 27, 2010 - link
In addition to the price, keeping the same socket is another plus for this processor.
ViRGE - Tuesday, April 27, 2010 - link
Are those other reviews using a 32bit OS or a 64bit OS? The last time I checked, the Phenom/Athlon II series was still poorly overclocking in 64bit mode. If it's still happening then any overclocking results would vary wildly depending on the OS used.
yankeeDDL - Tuesday, April 27, 2010 - link
I understand (and substantially agree with) the comments and conclusions regarding how the 6-cores Phenom compares against 2 and 4 core CPUs from Intel.I wonder though if these benchmarks are capturing the real benefits of 6 cores.
In my 'daily' use I have several programs running in background: virus-scan, instant messaging, music players, email clients, browsers (that regularly update RSS feeds) and sometimes also torrent clients. These all consume some CPU cycles, obviously.
With all these running in background, I wonder if the difference between a 2-core and a 6-core CPU will be more pronounced.
In other words: does it make sense to compare two multi-core CPU by running a single application at the time (albeit, possibly, a multi-threaded one)?
Calin - Tuesday, April 27, 2010 - link
Unfortunately in my experience, antivirus seems hard-drive limited even on Conroe dual core processors.
kmmatney - Tuesday, April 27, 2010 - link
I would agree. Going to an SSD will probably make more difference than adding more cores, when it comes to everyday multitasking.
KaarlisK - Tuesday, April 27, 2010 - link
You're lucky, both MSE and AVG usually hit 100% of one core both for my 1.8 and 2.33 GHz Core 2 Duos.
Taft12 - Tuesday, April 27, 2010 - link
Frankly, the list of apps you provide would run just fine on a single-core CPU.
beginner99 - Tuesday, April 27, 2010 - link
Yep, like on my netbook with a Z520 Atom: AVG + uTorrent + FF or Chrome, no issue (unless of course HD Flash movies, but that's another story).
On my desktop (which is also a pretty old and crappy E4300) I also run SETI@home, so basically I'm always at 100% CPU, but I still feel the HDD is the limit.
Scali - Wednesday, April 28, 2010 - link
Yes it makes sense... things like browsers, IM, email don't take a lot of CPU. They can easily be juggled by the OS on just a single core (or with hardware, using HT).More processes don't necessarily require more cores.
I mean, I am currently running two instances of Visual Studio, a browser with 10 tabs open, Skype, Spark, Notepad++, Outlook and a few other small things in the tray or background, and my dualcore still is at 1% CPU usage, according to Task Manager, and that 1% CPU is Task Manager itself. So why would I want 4 cores, let alone 6?
It really doesn't matter.
eekamouse - Tuesday, April 27, 2010 - link
I think there is something really good that could be tested here: the performance of these chips running virtual machines, especially with hypervisor technology.
Any possibility of testing these chips running either Xen or VMware and seeing how 4 virtual machines react on each, and how 6 react? Is the performance stable, etc.? The reason I ask is that, at this price point, if it can run 6 virtual machines each on their own core (or sharing cores) and maintain good performance, it would be really worthwhile investing in these for cheap virtualisation servers.
rickcain2320 - Tuesday, April 27, 2010 - link
That's all you need to know. Time to ditch my Q6600.
fitten - Tuesday, April 27, 2010 - link
Well... we haven't seen the consumer priced hex-core Intel parts yet... Everybody is comparing this to the high-end Intel parts (i7-980X is a high-end part). I'll wait to see what Intel's response is before removing them from the table.
formulav8 - Tuesday, April 27, 2010 - link
Enjoy the waiting game... I still doubt Intel would lower prices near AMD's 6-core, due to the fact that they would be ruining their much more profitable mainstream parts.
Jason
mapesdhs - Tuesday, April 27, 2010 - link
Anand,
Thanks for the review! A good read overall, but a couple of niggles...
Why are there no i7 860 results in the results charts for 7-Zip, Batman, Dragon Age and Dawn
of War II? Likewise, why no 1055T data in the power consumption charts?
On the rendering tests page, you said this about the Cinebench test, where the 1090T does well:
"... if you've got a lot of CPU intensive threads there's no replacement for more cores."
But this is contradicted by the earlier comment for the 3dsmax test:
"Not all heavily threaded workloads will show the Phenom II X6 in a good light. Here Intel
maintains the advantage:"
Clearly, generalising here is unwise. For some reason, the Ph2 X6s lose out on the 3dsmax
test; but why? Lack of cache? Insufficient RAM bandwidth? Any ideas?
What kind of overclocking is possible with a good air cooler? Results with the stock cooler
are not really that interesting. I've bought a Thermalright U120E RevC to go with my 860.
Lastly, on the conclusions page, you say:
"Applications like video encoding and offline 3D rendering show the real strengths
of the Phenom II X6."
Not so, IMO. The i7 860 did better for the 3dsmax test (I'd really like to know why),
and although the 1090T was faster for the Cinebench multithreaded test, if you take
into account the higher power consumption of the 1090T, it offers a virtually identical
power/performance ratio under load. One of the reasons I chose an 860 system for
video encoding was because of concerns about power consumption; I had expected
the two extra cores in the 1090T would give a clear advantage in this respect, but it
doesn't (slower for the DivX test anyway, which is the encoding system I use, and a
worse power/performance ratio for the x264 2nd-pass test). In other words, where
the 1090T offers a speed edge, it's small (2 entire extra cores not really helping that
much at all), and its usefulness is negated by greater power consumption under load.
Hmm, can you confirm that all six cores in the 1090T were being used during the
DivX and 2nd-pass x264 tests? It just seems so odd to me that a 6-core chip isn't
doing better. I notice the scaling over the Ph2 X4 965 isn't that great here.
Btw, have you considered using V-ray as a render test? Studios using 3DSMax
often use V-ray as it scales to loads of cores AFAIK - I've been speccing out
32-core systems for a company in order to exploit this. Full details at vray.info.
Also, re pricing, the 1090T appears to be about 238 UKP in the UK (eg. scan.co.uk),
while the 860 is slightly less (234 from Scan), though after some shopping around I
was able to find the 860 for just 205. Clearly, if Intel is even remotely concerned about
the Ph2 X6, all they have to do is a minor price drop on Lynnfield to maintain a useful
price/performance edge.
Lastly, a word about BIOS support for these CPUs. Recently I've been scanning
eBay for AM2+/AM3 mbds, seeing what's available for a friend I'm helping out
with an upgrade; checking each board's specs, it's shocking just how many boards
have not received BIOS updates to support even the Phenom2 X4 and Athlon IIs,
never mind these new X6s. ASUS boards seem especially prone to this issue. So
watchout, always check the CPU support before buying. I'm still peeved that my
existing expensive M2N32 WS Pro can't use a Ph2 X4 (my gf's cheapo Asrock mbd
can). For this reason I went with an Asrock P55 Deluxe for my 860 build as Asrock
seem to be much more on the ball with BIOS updates.
If it were up to me, I would make it compulsory for vendors to include CPU support on
any board that has a theoretically compatible socket... :}
Ian.
PS. Perhaps a more concrete point to make is that the Ph2 x6s now definitely make the
i7 870 utterly pointless given its price (85% more expensive than the 860 for less than
5% higher clock).
Anand Lal Shimpi - Tuesday, April 27, 2010 - link
The 3dsmax test was an outlier if you look at the full set of results in Bench (www.anandtech.com/Bench). It could be a number of problems. Not all applications do well with core counts that aren't powers of 2 for starters. I don't believe we're memory bandwidth limited, but the 6MB L3 might have something to do with it.
Like many benchmarks it's very difficult to keep 6 cores fully pegged. The x264 HD test ranges from 50 - 94% CPU utilization across all six cores on the Phenom II X6.
I haven't looked at vray as a benchmark, I'll do some digging :) And I agree completely on the issue of BIOS support. You should see how these things work behind the scenes, motherboard manufacturers are often still fixing BIOS issues on brand new boards even days before a launch like this.
Take care,
Anand
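For readers who want to check core utilization on their own encodes, here is a minimal sketch that samples per-core load while a job runs (assuming the third-party psutil package is installed; the duration and interval values are arbitrary):

```python
# Print per-core CPU utilization once per interval for a fixed duration,
# to see how evenly a workload keeps all cores busy.
import time
import psutil  # third-party: pip install psutil

def sample_per_core(duration_s: int = 30, interval_s: float = 1.0) -> None:
    end = time.time() + duration_s
    while time.time() < end:
        loads = psutil.cpu_percent(interval=interval_s, percpu=True)
        print(" ".join(f"core{i}: {pct:5.1f}%" for i, pct in enumerate(loads)))

if __name__ == "__main__":
    sample_per_core()
```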
mapesdhs - Tuesday, April 27, 2010 - link
Anand writes:
> The 3dsmax test was an outlier if you look at the full set of results in Bench
Have you tried other render tests such as Maya Mentalray, or Alias Render?
Note that I have a render benchmark which is extremely cache-friendly and scales to
hundreds of threads, might be useful for ruling out the cache as an issue:
http://www.sgidepot.co.uk/c-ray.html
> applications do well with core counts that aren't powers of 2 for starters. ...
Beats me why this should be. Threads are just threads afterall. If it's not scaling well
because it's not a power of 2 then something somewhere has been badly coded
or designed, either the 3dsmax renderer itself, or the chip in some way, or the OS.
Hard to tell I suppose.
> Like many benchmarks it's very difficult to keep 6 cores fully pegged. The x264 HD
> test ranges from 50 - 94% CPU utilization across all six cores on the Phenom II X6.
This confirms my conclusion then, the X6 doesn't offer anything significantly useful
over the 860 in some cases, including DivX encoding. Where it's faster than the 860,
the difference is too small to be useful given the higher power consumption, and in
some cases even on multithreaded loads it's actually slower. I started reading the
article fully expecting the 860 to be trounced for the DivX test; that it was actually
slower was a bit of a shock.
> I haven't looked at vray as a benchmark, I'll do some digging :) ...
The systems I've been speccing out include the SGI Altix XE 340, BOXX 10400, SGI
Altix UV 10, Dell R810, Dell R910, SGI Origin400, SGI Octane III, etc. ie. all dual or
quad socket i7 XEON, 48GB RAM, that sort of thing.
> And I agree completely on the issue of BIOS support. You should see how these things
> work behind the scenes, motherboard manufacturers are often still fixing BIOS issues on
> brand new boards even days before a launch like this.
I dread to think...
I don't know how the vendors are allowed to get away with it. Such policies force people to
ditch their older motherboards even when in theory said board could use a modern CPU.
My ASUS board *should* be able to use the X6, but instead it's stuck at the useless Phenom1.
Asrock's approach couldn't be more different; that they would bother to add Ph2 X4 support
to a board as cheap,old and quirky as the AM2NF3-VSTA is remarkable (socket AM2 with
AGP), though I suspect they won't add X6 support for this board.
Ian.
Jamahl - Tuesday, April 27, 2010 - link
Please take a look at some of the other reviews around the web (like Hexus, Tom's Hardware) and ask yourself if you are using benchmarks that massively favour Intel.
Drazick - Tuesday, April 27, 2010 - link
Wouldn't you say that today most applications are highly optimized towards Intel processors?
Jamahl - Tuesday, April 27, 2010 - link
Yes I would; however, Intel seems to have hit the jackpot on AT.
Just look at the rest of the reviews to see how far out of whack Anand's is with the majority.
Anand Lal Shimpi - Tuesday, April 27, 2010 - link
I don't believe our full results are out of whack with what others have posted. Heavily threaded apps favor the Phenom II X6 (3D rendering, video encoding, 7z compression benchmark), lightly threaded or mixed workloads favor Lynnfield (gaming, everything else). It's more a question of what sort of balance do you strike between these applications. That's why I culled down the results for the actual review so we have a more balanced representation of all of the different potential scenarios.
Take care,
Anand
Scali - Thursday, April 29, 2010 - link
What I noticed is that a lot of sites didn't include the i7 860, but only the considerably more expensive, yet barely faster, 870.
I'm glad you used the 860, as it is pretty much in the 1090T's price range.
And as your tests show, most of the time the 860 comes out on top... while also having lower power consumption.
kwm - Tuesday, April 27, 2010 - link
I AM AN IT STUDENT AND USE A LOT OF VIRTUALIZATION IN LABS: VIRTUAL PC, HYPERVISORS AND SO FORTH. WHAT I WOULD REALLY LIKE TO SEE IS A VIRTUALIZATION COMPARISON AND BENCHMARK. WE ARE MOVING SLOWLY IN THAT DIRECTION ANYWAY.
LoneWolf15 - Tuesday, April 27, 2010 - link
You must have been gone on the first day of IT class when they explained that typing in allcaps is bad netiquette.
kwm - Wednesday, April 28, 2010 - link
Did the big bad caps scare you? Sorry.
Taft12 - Tuesday, April 27, 2010 - link
OF COURSE 6 CPU CORES WILL PROVIDE A TANGIBLE BENEFIT TO VIRTUALIZED PLATFORMS!Skiprudder - Tuesday, April 27, 2010 - link
To folks asking for a more detailed overclocking review, I would just say that Anand almost always releases an in-depth OC article on a new CPU architecture anywhere from a day to a week later. I think he usually wants to get the basic info out first, then delve into the nitty gritty for those who OC.
silverblue - Tuesday, April 27, 2010 - link
I think Thuban could've been a little better realised:
1) Higher uncore speed - whatever happened to the touted 5.2GT/s HT3 link?
2) Triple channel controller - AMD have been using dual channel controllers for the best part of a decade - this HAS to be starving Thuban
3) Keeping the Phenom II's core control system. Phenom I may have been more elegant, but even if Thuban is faster at ramping up the voltages, it'll still result in issues with XP and Vista. So, the targeted audience, at least for Microsoft users, would be Windows 7.
Which reminds me... SysMark is on Vista, an OS known to cause issues with Phenoms. Would this have detrimentally affected the X6's scores, even if two cores are being taxed?
I don't think extra cache would be viable for AMD. The Athlon II X4s aren't far behind equivalent clocked Phenom II X4s even without any L3 cache, plus the added expense and die complexity would've just pushed prices, and temperatures, upwards. Of course, a higher model with 8MB of L3 cache would be nice to see.
It's not really a disappointment to see Thuban fail to topple the entry-level Nehalems. Remember that they're logical 8-thread CPUs and are thus more efficient at keeping their pipelines fed. You can still get a high-end AMD setup for cheaper than the competing Intel setups; just throw some heavily threaded software at it and it'll do very nicely. The new X4s may just give Intel cause to drop prices though.
One final thing - AMD's offerings are known to perform far closer to Intel CPUs when every single bit of eye candy is enabled in games, including AA, and pushing the resolution upwards. It may have been more telling had this been done.
silverblue - Tuesday, April 27, 2010 - link
Ignore the last bit; it wouldn't be a good indication of the power of Thuban.
gruffi - Wednesday, April 28, 2010 - link
1) Higher uncore speed means higher power consumption and probably less power efficiency.
2) You would need a new platform that makes the current one obsolete. You would also need much more time and money to validate.
3) I actually see no problem.
Sry, but your claims are unrealistic or pointless.
silverblue - Thursday, April 29, 2010 - link
"1) Higher uncore speed means higher power consumption and probably less power efficiency."You could just reduce the clock speeds to compensate, assuming a higher uncore yields a satisfactory performance increase. The i7-920 has an uncore speed of 2.13GHz and Phenom IIs at 2GHz.
"2) You would need a new platform that makes the current one obsolete. You would also need much more time and money to validate."
Fair dos.
"3) I actually see no problem."
The potential for a thread hitting an idle core would still be there as, even with Turbo CORE doing its thing, there would be the potential for three idle cores, however this will be minimised if AMD has decreased the delay needed for a core to ramp back up from 800MHz.
"Sry, but your claims are unrealistic or pointless."
That's fine.
jonup - Tuesday, April 27, 2010 - link
Nice read; well done Anand! Are you planning to do an OC follow-up like you've done in the past? Also, I noticed that on the second "CPU Specification Comparison" chart on the first page "AMD Phenom II X4 965" is included twice.
P.S. What's an IOMMU? Can someone explain please?
Ryan Smith - Tuesday, April 27, 2010 - link
The short answer is that an IOMMU is a memory mapping unit (MMU) for I/O devices (video cards, network controllers, etc). For most readers of this site, the only time they'd use an IOMMU is when using a virtual machine, as an IOMMU allows the virtualized OS to talk more or less directly to the hardware by translating the virtual addresses to the physical addresses the hardware is using. However it does have other uses.
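On a Linux host with the IOMMU enabled, one rough way to see it in action is to list the IOMMU groups the kernel exposes under sysfs; a minimal sketch follows (assuming /sys/kernel/iommu_groups is present, which it only is when the IOMMU is switched on in firmware and the kernel):

```python
# List the IOMMU groups the Linux kernel exposes under sysfs.
# Assumes a Linux host with the IOMMU enabled (e.g. AMD-Vi / Intel VT-d);
# without it, /sys/kernel/iommu_groups is empty or absent.
from pathlib import Path

def iommu_groups() -> dict[str, list[str]]:
    root = Path("/sys/kernel/iommu_groups")
    if not root.exists():
        return {}  # IOMMU disabled or not supported
    groups: dict[str, list[str]] = {}
    for group in sorted(root.iterdir(), key=lambda p: int(p.name)):
        # each group directory holds a 'devices' folder of PCI addresses
        groups[group.name] = sorted(dev.name for dev in (group / "devices").iterdir())
    return groups

if __name__ == "__main__":
    for group, devices in iommu_groups().items():
        print(f"group {group}: {', '.join(devices)}")
```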
jonup - Tuesday, April 27, 2010 - link
Thanks!
Belard - Tuesday, April 27, 2010 - link
While in general the upper end i5/i7 CPUs are a bit faster, there are other costs involved.
Basic AMD boards are $80, upper end CrossFire boards are $100~150, while typical P55 boards are $100~200, and X58s are $200~300.
So with the Intels you pay about $150~400 more to get a few seconds of performance improvement.
Also, if someone buys a lower-end i3 CPU, they can't upgrade to a top end i7 CPUs because of different CPU sockets. While those who bought an AMD class board a year ago will most likely have the option to upgrade.
Scali - Thursday, April 29, 2010 - link
Who would want to run a six-core CPU on a two-year-old board though?
You wouldn't have things like USB 3.0 or SATA 6Gbps, probably not even PCI Express 2.0 either.
I don't think it's a good idea to upgrade a CPU in the same board in general.
Usually the old board will severely hamper performance, so you're not getting your money's worth for the new CPU. And you'll miss out on the new features.
In all the years that I've been building PCs for myself and for friends/family, I have never found a CPU-upgrade very compelling, and I wouldn't recommend it to anyone.
I wouldn't be surprised if it's better to get a 1055T instead of a 1090T, and spend the extra money on a new board, rather than running the 1090T on the old board. There's a very interesting article in there for Anand I suppose.
stalker27 - Tuesday, April 27, 2010 - link
Tested on a Crosshair IV, the 1090T did far better than any i7... proves how much MSI and Gigabyte suck at this thing.
hooga - Tuesday, April 27, 2010 - link
Can you please make a note of whether or not all power saving functions like C"n"Q and C1 for the Phenom II are activated?
Anand Lal Shimpi - Tuesday, April 27, 2010 - link
All power saving features were enabled on all chips - C1E and CnQ were enabled on the X6. You need to enable these features otherwise Turbo Core doesn't work.
Take care,
Anand
KaarlisK - Tuesday, April 27, 2010 - link
With 6 cores, memory bandwidth might be more important. The gain from DDR3 for Phenom II was minimal. Or is L3 cache bandwidth the bottleneck?
hooga - Tuesday, April 27, 2010 - link
I meant C1E obviously. And what about overclocking? Just a half-hearted attempt? Makes me wonder if there is some fanb... no, I'm not going to say that word. I've always seen you guys as very professional and can't quite believe that. I'm hoping that you post the 4.2GHz results very soon. :)
silverblue - Tuesday, April 27, 2010 - link
Bit-tech couldn't exceed 3.89GHz with their sample. Guru3d managed 4.1GHz using an OCZ Vendetta air cooler.These things are usually quite variable.
silverblue - Tuesday, April 27, 2010 - link
That's certainly helpful.Considering the intended uses for this type of processor - i.e. heavily threaded applications - it offers more performance for the price as compared to anything Intel is throwing out.
Yes, Intel's products are faster per clock, but they just can't match AMD for performance at a given price point. Guru3D touched upon the point that you could build a 1055T machine for $600; that's $400 less than the 980 on its own. How much would it cost to build a 980 setup, or even a 930 solution in comparison?
fitten - Tuesday, April 27, 2010 - link
Depends on where you buy stuff... Microcenter runs great deals on the i7/930 (around $200), for example.
chrnochime - Tuesday, April 27, 2010 - link
Even with the 930 being $200 at MC (I have one nearby, so yes, I know that), X58 motherboards are still more expensive overall. The cheapest is $160+, and they go up to $400. The thing with X58 is that just finding a decent board that doesn't have a bunch of weird problems voiced by owners on forums/Newegg is very, very difficult. One either has to spend around $300 to perhaps have a higher chance of getting a relatively problem-free board, or just pray the board that arrives doesn't have a DOA memory slot (which seems to plague quite a few mobos). I don't think the situation is anywhere near as bad on the AM3 side.
Too bad it's not like the old days (5+ years ago) when boards would just WORK without having to eff around to even keep it stable at stock speed.
vectorm12 - Tuesday, April 27, 2010 - link
I'd really like to see how the 1055T stacks up in terms of overclocking ability. Seems to me as if the 1055T overclocked to around 3.4-3.6GHz would be a good stepping stone from the current Phenom X4 (non Black Edition) lineup.
Etern205 - Tuesday, April 27, 2010 - link
Just to give a heads up that Maxon released a new Cinebench version, R11.
Roland00 - Tuesday, April 27, 2010 - link
If you are building a new system, of course you will want to use DDR3 memory, since there is little to no price difference between the two.
But some people have older systems. I wouldn't be surprised if people are upgrading from their AMD 5000+ or 6000+ and they have a compatible motherboard and memory. I am just wondering how much of a bottleneck the DDR2 is and whether it is a reason to skip a partial upgrade and instead do a full upgrade.
It doesn't have to be all the tests, just a couple.
Taft12 - Wednesday, April 28, 2010 - link
IIRC the tangible performance benefit from DDR3 over DDR2 is in the range of 2%. Higher speed but higher latency.
Goopfruit - Tuesday, April 27, 2010 - link
w00t! Go Netburst!xD
LoneWolf15 - Tuesday, April 27, 2010 - link
That my Q9450 clocked to QX9770 spec is still very competitive.
I think it will be a while before I change platforms, especially with Intel planning socket changes for 2011. By then, we should see some more interesting options and maybe DDR3 will drop in price - I can always dream.
bigboxes - Tuesday, April 27, 2010 - link
When testing your 1366 i7 CPUs you need to use 6GB of RAM. Why? Because they can utilize that extra bandwidth for increased performance. You stated that early on, but since 1156 came out your tests have been only with 4GB of RAM. Drop 6GB of RAM in the 1366 and retest. I'm sure that is what most 1366 owners are using in their systems.
Anand Lal Shimpi - Tuesday, April 27, 2010 - link
In order to keep memory size constant we use 4GB on all systems (4x1GB on the LGA-1366 boards). You get full triple channel access for the first 3GB of memory, only the final 1GB is limited to dual channel. It's the only compromise we could come up with short of giving LGA-1366 a memory size advantage.
The vast majority of our tests fall in the 2 - 3GB range, so I don't believe we're holding back the Nehalem/Gulftown performance much if at all here.
Take care,
Anand
Makaveli - Tuesday, April 27, 2010 - link
If most of the benchmarks use only 2-3GB, what difference does it make to have the i7 with 6GB of RAM?
SRivera - Wednesday, April 28, 2010 - link
Applications that will use that much memory and memory bandwidth. Off the top of my head, I can only think of moderate-to-heavy database use. Too many people pass over the 1156 platform for 1366 because of triple-channel memory when in reality, even for heavy gamers, you're just never going to fully utilize that much memory or bandwidth. I run a 4GB system with games and oodles of apps open at the same time, and I've never seen my memory use go past 3GB, or at least not far past it.
So it really comes down to what your use for a system like the i7 9xx & X58 would be, and whether you will really need the triple-channel bandwidth and extra memory.
mapesdhs - Wednesday, April 28, 2010 - link
Uncompressed HD editing easily uses more than 4GB RAM. Anyone using an X58 with a
Quadro card for professional work should definitely have 6GB minimum.
Ian.
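As a rough sanity check on that (assuming 8-bit 4:2:2, about 2 bytes per pixel): 1920 x 1080 x 2 bytes ≈ 4MB per frame, or roughly 100MB/s at 25fps, so caching well under a minute of uncompressed footage already pushes past 4GB.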
LoneWolf15 - Tuesday, April 27, 2010 - link
Zero. That's the amount you have contributed to this thread.
chrnochime - Tuesday, April 27, 2010 - link
Water closet? Haven't heard that term in YEARS.
Peroxyde - Tuesday, April 27, 2010 - link
For a machine used as a light VM server, is the AMD Thuban better than the i5 750?
Taft12 - Wednesday, April 28, 2010 - link
A light VM server should probably use your old PC in the corner gathering dust.
ant_ - Tuesday, April 27, 2010 - link
I was hoping to see some benchmarks in Battlefield: Bad Company 2. I thought Anandtech had added it to the gaming tests. We know the game scales well using a quad core vs. a dual. I was curious to see the difference between 4 vs. 6 cores.
toolonglyf - Tuesday, April 27, 2010 - link
Ya, I'm a bit disappointed not seeing it there... I think it would have shown something interesting.
KranZ - Tuesday, April 27, 2010 - link
I'd be curious to see how this stands up in the VM tests you did earlier this year. At face value, it seems VMs = more threads and this proc would be of value.
Crypticone - Tuesday, April 27, 2010 - link
I noticed the WoW benchmarks are missing for this CPU. Any chance of getting them added to the gaming page?
HangFire - Tuesday, April 27, 2010 - link
Thanks for including the old P4EE, the E6850, and even the QX9770. I think a Q6600 or a Q9550 makes more sense than a QX9770, as there will be many, many more readers looking to upgrade from those processors, but at least I can extrapolate from there. They all face the issue of having to replace DDR2 with DDR3. Which gets me to the point of my comment: little to nothing is made of the situation of a user with an older AM2+ or early AM3 motherboard who is deciding whether to upgrade the CPU (an easy drop-in), the M/B+RAM+CPU, or pass. Again, many, many more AMD-using readers are in that position than QX9770-using readers. Both AM2+ users and Core 2 users have the same dilemma: when they upgrade, they have to replace DDR2 with DDR3 as well. That makes upgrades not only expensive, but exercises in future-proofing, since these big expenditures need to be justified by long life and upgradability.
In that vein, I would have liked to have seen an AM2+ motherboard in the review to verify compatibility and identify any performance differences. Previous reviews identified little difference between DDR2 and DDR3, but now we have the 890 chipset and hex-core to consider in the mix.
Many of us upgrade step-wise, that is, get an AM3 M/B and DDR3 today and use our old formerly-AM2+ CPU for a while, and then look to upgrade to a hex-core AM3 chip later. Or maybe the other way around. We look to articles like this for guidance, but there seems to be an assumption that everyone reading is trying to decide whether to upgrade from a QX9770 to a 1090T, which is not only unlikely, but silly and pointless.
The few QX9770 users are going to fall into two camps: those who simply buy the next Intel EE, and those who will never, never, ever blow their money on that kind of price/performance again.
In short, I think Anandtech should start considering more the common case of a user looking to jump generations, as opposed to silly non-questions like comparing 790 to 890 chipsets. If I bought it last year, unless some huge performance jump has been made, it's going to stay in service for another two. For those users, the questions are: what CPU will it be running at that time, and should I buy that CPU now or in the autumn?
The most keenly interested users are the ones looking to either upgrade their old AM2+ M/B "for another year", or those stuck with last-generation hardware who are trying to figure out where and when to go from there. The latter are unlikely to be interested in these hex-core CPUs, but they need to read the article to find out.
HangFire - Tuesday, April 27, 2010 - link
Dang, that was wordy. Where's the edit feature?
mapesdhs - Wednesday, April 28, 2010 - link
HangFire writes:
> Which gets me to the point of my comment, that little to nothing is
> made of the situation of a user with an older AM2+ or early AM3
> motherboard that is deciding whether to upgrade the CPU (easy drop in),
> M/B+RAM+CPU, or pass. ...
Some good points there. Indeed, users with older mbds that could
theoretically use these newer CPUs will find it hard to locate useful
info on whether an upgrade is worthwhile or if their older base hardware
is holding back performance.
This is made more complicated by two issues:
a) There are AM3 motherboards that use DDR2 RAM.
b) Lots of AM2 boards have not received BIOS updates to support even
the quad-core version of the Phenom2, never mind the new 6-core chips.
Way back when the Ph2 came out, I had hoped to write a review of the
chip by upgrading my existing AM2 6000+ 3.25GHz system (4GB DDR2/800
RAM). It would have been an interesting comparison, but alas ASUS
hasn't bothered to release a BIOS update to support anything newer than
the useless Phenom1, so I'm stuck. In the end I just bought a new mbd
from a vendor which seems to care much more about BIOS updates (Asrock)
and an i7 860.
> have the same dilemma, when they upgrade, they have to replace DDR2
> with DDR3 as well. That makes upgrades not only expensive, but
The other alternative is to buy one of the mbds that have an AM3 socket
but still use DDR2 RAM, allowing one to retain one's existing RAM kits.
For example, the Asus M4A77D is a reasonably priced board, Socket AM3,
supports Ph2 X4, uses DDR2 RAM:
http://www.scan.co.uk/Search.aspx?q=LN29742
But again the conundrum: will ASUS update the BIOS to support the Ph2 X6?
Anyone's guess. My board cost 100% more and they didn't bother even adding
support for the Ph2 X4 (btw, I did ask ASUS about this, their response was, it's
an old board, who cares). By contrast, Asrock has already added X6 support
to many of its older/cheaper boards. I asked Asrock for upgrade advice and
they were very helpful.
> differences. Previous reviews identified little difference between DDR2
> versus 3 but now we have 890 and hex core to consider in the mix.
Good point, an X6 in a DDR2 setup vs. a DDR3 setup where the RAM and all
cores are being hammered (eg. video conversion, animation rendering, etc.)
would certainly be interesting.
> Many of us upgrade step-wise, that is, get an AM3 M/B and DDR3 today
> and use our old formerly-AM2+ CPU for a while, and then look to upgrade
I agree, I did something similar. Originally had an AM2 6000+ AGP setup,
switched to a PCIe board so I could keep the CPU and RAM, replaced the
gfx (X1950Pro AGP) with an 8800GT. Naturally I'd been hoping to switch
the CPU to a Ph2 X4 later, but no such luck.
Anyone know what Gigabyte is like with their BIOS updates? I *almost*
bought a Gigabyte board for my new build, but went with Asrock after
deciding the latter's slot spacing was better suited for my needs (I
plan on fitting a PCIe RAID card, among other things).
Btw, Gigabyte also has an AM3 board which uses DDR2 RAM, the GA-MA770-UD3:
http://www.scan.co.uk/Products/Gigabyte-GA-MA770-U...
Oh, goes without saying both the above boards are not enthusiast boards,
and do not support SLI/CF.
Ian.
HangFire - Friday, April 30, 2010 - link
"Anyone know what Gigabyte is like with their BIOS updates? I *almost*bought a Gigabyte board for my new build, but went with Asrock after
deciding the latter's slot spacing was better suited for my needs (I
plan on fitting a PCIe RAID card, among other things)."
I have been using Gigabyte for the past 6 years and found them awesome in their ability to upgrade both enthusiast and non-enthusiast boards' BIOS for the latest processors.
ASRock is in a class by itself in supporting the step-wise upgrader, but (until recently) performance usually took a back seat.
SRivera - Sunday, May 2, 2010 - link
ASRock also offers only a 1-year warranty, and they don't even handle it themselves; they ask you to go back to whatever brick & mortar shop or online store you bought from (they call all of this the "authorized retailer/distributor") and have them handle getting you a replacement. Which most will only do within 30-90 days of purchase. Unless you really, really want to save money, or want to mix and match parts the way many ASRock boards let you do, I would never recommend an ASRock board over an ASUS or Gigabyte one.
ThumpingOtter - Tuesday, April 27, 2010 - link
"Price point" does not make you sound cool, it just adds an extra word for no reason. Just say price, man. We understand how it varies. Toss it atop the garbage heap of unecessary buzz terms like "solutions" that make computer industry talking heads look like try-hards.SRivera - Wednesday, April 28, 2010 - link
Hi, could you post the exact model numbers of the Corsair memory kits you used? It'd be much appreciated.
Great review, very informative. Intel still holds the performance crown, but it's nice to see AMD staying right on Intel's tail, keeping the pressure on for more innovation from both camps.
Mr Bill - Wednesday, April 28, 2010 - link
I see no attempt to run the memory and the NB a bit quicker. Let's not forget that the Black Edition is made to run comfortably with DDR3 @ 1600MHz and the NB @ 2400MHz. It gives a significant boost to all operations. My system:
AMD Phenom II X4 955 BE DDR3@1600 NB@2400 | MSI 790FX-GD70 | 4 x 2GB = 8GB OCZ Platinum DDR3 | Intel X25-M G2 | Asus Radeon HD 4770 | BenQ FP241VW 24" LCD Monitor | Antec Neo HE 550 | Cooler Master Hyper 212 Plus | WinXP64 SP2
defsol - Thursday, April 29, 2010 - link
So the AMD 6-core is not as good as Intel's line when it comes to one benchmark at a time. But what about running multiple instances of programs, either assigning them to certain cores or just letting them run freely? For example, I would like to run 6 instances of DVD Shrink to rip and burn a DVD to an ISO, or run 6 instances of HandBrake and encode 6 different video files at the same time rather than batching them. Is that possible? Or VirtualBox or VMware running 3 OSes with dual cores each and testing their performance. Can someone do that please, or send me a CPU/mobo and I will test it out. Thanks.
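For what it's worth, pinning each instance to its own core is doable from outside the applications. A minimal sketch in Python, assuming the third-party psutil module is installed; the file names and the HandBrake preset here are just placeholders:

import subprocess
import psutil  # third-party: pip install psutil

# Placeholder source files -- substitute your own.
sources = ["clip%d.mkv" % i for i in range(6)]

procs = []
for core, src in enumerate(sources):
    # Launch one HandBrakeCLI instance per input file.
    p = subprocess.Popen([
        "HandBrakeCLI", "-i", src,
        "-o", src.replace(".mkv", ".mp4"),
        "--preset", "Normal",
    ])
    # Pin this instance to a single core (0-5 on a Phenom II X6).
    psutil.Process(p.pid).cpu_affinity([core])
    procs.append(p)

# Wait for all six encodes to finish.
for p in procs:
    p.wait()

The same trick works for DVD Shrink or anything else; Task Manager's "Set affinity" option does the same thing by hand.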
silverblue - Thursday, April 29, 2010 - link
I agree. If it's a struggle to utilise all six cores at 100%, just add another program to the mix. This may just prove once and for all whether a physical Stars core can beat a logical i-core, and thus whether AMD were right to launch Thuban in the first place.
Scali - Friday, April 30, 2010 - link
I'll say a few things to that... A physical Stars core actually has to beat TWO logical i-cores. After all, we have 6 Stars cores vs. 8 logical i-cores.
So if we were to say that the 4 physical cores on both are equal (which they're not, because the i-cores have an advantage), that leaves 2 physical cores against 4 logical cores.
Another thing is that if you have to work hard to set up a multitasking benchmark that shows Thuban in a favourable light, doesn't that already prove the opposite of what you are trying to achieve?
I mean, how realistic is it to set up VirtualBox/VMware benchmarks for a consumer processor? Doesn't that belong in the server reviews (where, as I recall, AMD's 6-cores couldn't beat Intel's 8 logical cores in virtualization benchmarks either)?
Virtualization is not something that a consumer processor needs to be particularly good at, I would say. Gaming, video processing, photo editing. Now those are things that consumers/end-users will be doing.
wyvernknight - Thursday, April 29, 2010 - link
@mapesdhs There's no such thing as an AM3 board with DDR2. Only an AM2+ board with DDR2 that has AM3 support. The MA770-UD3 you gave as an example is an AM2+ board with AM3 compatibility: "Support for Socket AM3 / AM2+ / AM2 processors". AM3 boards do not support AM2+ and AM2 processors.
mapesdhs - Thursday, April 29, 2010 - link
Strange then that the specs pages specifically describe the sockets as being AM3.
Ian.
Skyflitter - Thursday, April 29, 2010 - link
Could someone please tell me the difference between the Phenom II X6 1090T & 1055T? I would like to put one of these new chips into my Gigabyte DDR2 MB, but the Gigabyte web site says my board only supports the 1035T and 1055T chips. My board is rated @ 140W. (GA-MA770-UD3)
I am currently running an Athlon 64 X2 6400+ (3.4GHz) and I do not want to lose too much clock speed by going with the 1055T (2.8GHz).
Do all the new Phenom II X6 support DDR2?
cutterjohn - Friday, April 30, 2010 - link
I'm waiting for them to cough up a new arch that delivers MUCH better per-core performance. There is just no value proposition with their 6-core CPU that mostly matches a quad-core i7 920, which can be had for a roughly similar price point, i.e. the i7 930 at $199 @ MicroCenter.
Either way, unless I win the giveaway :D, I'm now planning to wait at least until next year to upgrade the desktop, to see how Sandy Bridge turns out and IF AMD manages to get their new CPU out. I figure I may as well wait for the next socket, LGA 2011, for Intel, and what I'm sure will be a new one for AMD with their new CPU. As an added bonus I'll be skipping the 1st generation of DX11 hardware, as new architectures supporting new APIs (DX11/OGL4) tend not to be the best optimized or most robust, especially, apparently, in nVidia's case this time. (Although AMD had an easier time of it, as they made few changes from R7XX to R8XX, as is usual for them. AMD really needs to start spending some cash on R&D if they wish to remain relevant.)
silverblue - Friday, April 30, 2010 - link
The true point of the X6 is heavy multi-tasking. I'd love to see a real stress test thrown at these to show what they can do, and thus validate their existence.
spda242 - Sunday, May 2, 2010 - link
I would really like to have seen a World of Warcraft test with these CPUs like you did with the Intel 6-core. It would be interesting to see whether WoW can use all the cores, and to what effect on performance.
hajialibaig - Wednesday, May 5, 2010 - link
Not sure why there is no power vs. performance vs. price comparison of the different processors. As for the performance, it could be anything you want, such as gaming performance or video encoding. Such a comparison would be interesting, since you might well pay back the higher initial price via power savings.
Viditor - Thursday, May 6, 2010 - link
There appears to be a disparity... In the forums, the guys who have the 1055Ts are getting 4.1GHz on 1.42v, and are doing a lot of very stable benching. It appears to be more of the rule than the exception... could you have gotten either a bad board or a bad chip?
http://forums.anandtech.com/showthread.php?t=20698...
mapesdhs - Tuesday, May 11, 2010 - link
Has anyone here bought a 1090T? How did you find it? Particularly interested in those using
their systems for video encoding and/or animation rendering.
Ian.
xXChronoXx - Sunday, June 20, 2010 - link
I have been jumping back and forth on this issue while trying to decide whether I should go with AMD or Intel for a gaming machine. Until I read this article I was almost completely swayed to the side of Intel; however, when reading the gaming performance numbers I was somewhat surprised by just how close AMD's chip actually came on most points. I understand that the 1090T got the tar kicked out of it a lot during this comparison. However, I have to consider the fact that upgrading to a quad core now will almost certainly result in me having to change motherboards down the road when (not if) Intel decides that they no longer want to support their chipset. It makes the idea of buying a second-gen chip seem like a bad choice even if it has slightly higher performance. The only comparison that made me cringe was Dawn of War II, and possibly Dragon Age: Origins (but only because the Core i7 980X had an impressive 170fps). With the exception of Dawn of War II, these framerates seem so high (and in most cases close together) that I can't really imagine there being a noticeable difference. So my question would be this: if you're running two or more high-end graphics cards in CrossFireX on the 1090T, are you really going to notice any difference on a consistent basis compared to a quad core, or are we just splitting hairs at this point?
lisk - Monday, July 5, 2010 - link
I'm a chess player. I use the Deep Rybka 4 SSE4.2/SSE4A-based engine. And I find the 1090T is faster than the i7-930/920/870/860. The i7-965/975/980 is too expensive, so the 1090T is my best choice here.
jsimonetti - Wednesday, July 7, 2010 - link
Do you know if the AMD Phenom II X6 1090T will fit in my M3N-HT Deluxe?
papalazaru - Wednesday, July 14, 2010 - link
Price of a 1055T platform: £350
Price of an i7 860 platform: £550
Price of an i5 750 platform: £450
Also, there is talk of AM3 support for the new AMD processors (Bulldozer, 8-core, 32nm).
Personally, I have no complaints about my 1055T. It runs very cool and quiet (Corsair H-50), and I get good performance; coupled with an HD 5850, it copes with anything. It's a decent mid-to-high-spec system.
The Intel/Nvidia side is also an excellent gaming platform, especially with the arrival of the new GTX 460, which can compete directly with the HD 5850 at a lower price point (one which will no doubt be reduced at some point).
kznny - Thursday, August 19, 2010 - link
I was thinking about updating my E6850 so I could play better games. Looking at your review, I clearly see the chip is not the bottleneck but the video cards are, so I can go with a new SLI configuration and really rock. Saved me a lot of money - thank you!
pacmankiller - Monday, September 13, 2010 - link
Get your i5 or i7 to 4.5GHz, hahahaha, and put your 3DMark score up. The AMD 1090T is the second-best CPU, hands down...
Alaskagram - Thursday, November 11, 2010 - link
I purchased an ASUS CG1330 with a Phenom II X6 1035T @ 2.6GHz. Nowhere do they mention the turbo function. Is this something I can turn on, or is it an automatic feature? I bought this after having a Gateway 6840 which died after 10 days. When it requested that I insert the restore disc, I realized that the optical drive did not have a physical eject button; consequently I could not insert the restore disc. Can you say "catch-22"? I bought it at Costco, who does not offer on-site tech help, so I had to return it (the last one, of course) and go to Best Buy. Lessons learned: support, support!
kenupcmac - Wednesday, December 1, 2010 - link
Should I get the AMD X6 1055T or an Intel i7 9xx for 3ds Max and CAD? I do a lot of V-Ray and lighting effects.
kenupcmac - Wednesday, December 1, 2010 - link
So now the AMD X6 is better for 3ds Max compared to the Intel i7?
Wabid Wabit - Tuesday, December 28, 2010 - link
Please check this site out. It is not an Intel fanboy site or an AMD fanboy site, but it has the info you need. This page is an old post, but the site has the info from then and now, and it looks like - wait for it - wait for it - Intel just plain kicks ass, and all of us computer geeks know that.
http://www.cpubenchmark.net/high_end_cpus.html
azizul.hoque - Monday, January 17, 2011 - link
Hi, can I use this processor for 3D Studio Max?
tipoo - Saturday, May 7, 2011 - link
Of course you can use it.
superccs - Saturday, June 11, 2011 - link
I have disabled the middle 2 cores on my 1055T (240MHz FSB, auto voltage, CnQ and Turbo enabled) and it works quite well at bringing down temps when gaming. When Turbo Core is active the voltage of all cores goes up to 1.475V, so disabling the cores saves power and lowers temps. This works well in the summer; during the winter all heaters are enabled.
rustamveer - Friday, July 29, 2011 - link
Sir, I want to upgrade my PC with the AMD Phenom II 1090T and I'm new to AMD processors. Please suggest which motherboard I should choose; I don't know much about motherboards.
Please also tell me the price in India of the Phenom II 1090T and a motherboard.
I will be very thankful to you. Waiting for your reply!
archangel2003 - Saturday, February 4, 2012 - link
Sounds a lot like the cycle magazines touting one bike having 1.5 HP more than the other, but really, how much riding is done nearly bouncing off the rev limiter? Same thing with these chips.
How often are you going to experience the slight difference between this top-of-the-line, "at its limits" Intel chip and the AMD at 1/3 the price?
I think the point of the article was that the huge cost savings of an AMD offsets the slight difference in performance.
BTW, all my computers have Intel only because that is what they had in them, and if in the future I needed to make a choice, I would need better info like this article offers.
I could have gone with the AMD 6-core for less than my Intel i7 quad-core cost!
Somebody23 - Tuesday, September 17, 2013 - link
I have managed to push my 1090T to 4.2GHz on all cores. It was mostly stable in benchmarks. Downclocked 100MHz to 4.1GHz. It's stable at 4.1GHz; the computer doesn't bluescreen in a 3-hour stability test.