Benchmarks clearly have their place, but the danger is always in the interpretation. Some folks will always look for that simple, single number to define "overall system performance", but how can you ever fairly create such a metric? Maybe it's better to end the single-metric game and just list a few simple subscores that better represent several common usage scenarios. I don't know what the best categories would be, but they might include Gaming, Video and Content Creation, Office and Productivity, and Statistics and Computation.
And then you'd have to resist the urge to somehow weight these individual categories into a somewhat meaningless single metric...
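To make that concrete, here's a quick sketch (hypothetical subscores and weights, all numbers made up) of why collapsing subscores into one "overall" number is so fraught: the choice of weights alone can flip which system wins.

```python
# Hypothetical subscores for two systems; all numbers are made up.
subscores = {
    "System A": {"Gaming": 90, "Productivity": 60, "Computation": 55},
    "System B": {"Gaming": 50, "Productivity": 85, "Computation": 80},
}

def overall(scores, weights):
    """Weighted sum of category subscores -- the 'single metric'."""
    return sum(scores[cat] * w for cat, w in weights.items())

# Two equally 'reasonable' weightings...
gamer  = {"Gaming": 0.6, "Productivity": 0.2, "Computation": 0.2}
office = {"Gaming": 0.1, "Productivity": 0.5, "Computation": 0.4}

# ...produce opposite rankings from the same subscores.
for name in subscores:
    print(name,
          round(overall(subscores[name], gamer), 2),
          round(overall(subscores[name], office), 2))
# System A wins under the gamer weights, System B under the office weights,
# so the "overall" winner depends entirely on the weighting you chose.
```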
I absolutely agree with the idea of using several subscores.
I long ago stopped looking at "overall" PCMark and SYSmark scores, because how your computer performs obviously depends on what you're doing.
However, I am more on AMD's side regarding this dispute. The significance of the GPU has certainly increased in the past decade for the general consumer and Sysmark should adjust their weights accordingly.
"However, I am more on AMD's side regarding this dispute. The significance of the GPU has certainly increased in the past decade for the general consumer and Sysmark should adjust their weights accordingly. "
But the software and compilers aren't there yet. Now that AMD has more forward-thinking hardware, they want the benchmarking companies to jump on board with their design right away?? Or they'll take their ball and go home.
What if Intel had dropped out back in the P4 days because software and benchmarks weren't taking full advantage of SSE? Eventually they did, and the P4 dominated the Athlon XP by the end.
By the end, the P4 was fighting against the Athlon 64. There was a period when the P4 was better than the Athlon XP chips, but I don't think it was that drastic. Then the Athlon 64 came out and suddenly the tables were turned, all the way until Core 2.
Sorry, but in general the P4 was a POS CPU from the moment it first farted out of Intel's fabs. Even the P3 had higher IPC.
When the first P4s came out (Socket 423, 1.4~1.8GHz), they were battling the PIII and the AMD Athlon, not the Athlon 64 or Athlon XP class CPUs.
So clock for clock, the Athlon 1.4s were up there with or better than the P4s (same with the PIII, up to a point), even though the P4s had double the memory bandwidth.
Then the Athlon XP CPUs were a good series against the first Socket 478 P4s. Then came the Athlon 64s and the trend continued... 2.0GHz Athlon 64s that were equal to 3.2+GHz P4s.
The area where the P4 did well was rendering 3D graphics or video - linear work. Gaming, everyday work... they were slow. At one of my jobs, we had some 3.0GHz P4s that felt so sluggish compared to any AMDs of that era.
Intel's anti-competitive business practices kept AMD down (AMD with more money would have had more money for R&D).
Intel was smart when they built their Centrino platform on the Pentium-M rather than on P4's NetBurst. It's very much a PIII / AMD style design layout. Intel realized that NetBurst was crap and the megahertz race didn't amount to squat anymore.
When Core2 came out (based off the Pentium-M design), AMD had just reached 40% desktop CPU market share. Walk into an Office Depot/Staples - you would see 8 out of 10 PCs with AMD CPUs.
Yea, and like he said, there was a period when the P4 (Northwood B and C) outperformed the Athlon XP. That was before the Athlon 64s came out.
But even an old Athlon 64 3800+ X2 (dual core) is fine for 90% of the population, and that's a six-year-old chip. When the 3800+ X2 came out, no six-year-old chip could say the same. Boy, times have changed.
Not per clock though. Athlons were still faster for most things. It's just that we were getting 2.6 - 2.8GHz P4s and the Athlon XPs were still lurking around 2GHz.
Yea, when the AMD Opteron came out in April 2003, around the same time as Northwood C, it was based on the B3 stepping, which had more errata and was limited to DDR333. That's why they waited for the C0 stepping that fixed these problems before releasing the Athlon 64, even though releasing it earlier would have killed the Northwood C immediately.
In terms of productivity, the GPU isn't really integral in most applications. The entire Adobe suite and Office suite will run better on a Sandy Bridge i5 with an Intel GPU than on Llano. I really only see the GPU truly affecting performance in 3D applications, and even then you would probably want a production-class GPU instead of the 6000 series in the Llano processors. I see the Llano processors as a way for gamers to get a solid, mid-grade gaming laptop for under $800, but other than that, they just aren't better or even as good as Intel's offerings.
Benches exist for just CPU or GPU testing if that's what people want. What most consumers want to see is real-world system performance. As noted, different users have different needs and priorities, so using the most appropriate benchmarks is important if you desire objective, relevant test data.
For years Intel has "influenced" benches to try and highlight their CPU core speeds. When AMD proved that core speed was not as important as system performance then Intel tried to get bench results weighted in their favor. This is nothing new as GPU makers have also done this for years. What is unacceptable IMO is how unscrupulous some bench producers can be. We all know money talks and Intel spends plenty to get what they want. As long as Intel can buy the bench test designs they desire, consumers will continue to be misled.
To say they're being "misled" is in itself a misleading term. What Intel is doing is working to get benchmarks that show their CPUs are faster--and they are (at least since P4 was retired and C2D showed up). AMD has a point that GPUs/APUs can provide a better experience in some cases, but there are a lot of things that still benefit from a good CPU. Actually, I reference this in the text, but for many users replacing an HDD with an SSD would be better than a faster CPU (since most users run a bloated pig of an Internet Security Suite).
If you do video and imaging work, Intel's current CPUs are usually superior. If you just run basic office/Internet tasks, Intel is technically faster but it's mostly meaningless as both sides are "fast enough". Even for video playback, AMD isn't vastly ahead of Intel--SNB can handle common video files perfectly well, and it's only the more esoteric stuff where they falter (and AMD isn't perfect there either). Where AMD is really ahead is in graphics, specifically when it comes to gaming, but you can easily fix that deficiency by adding a $50 GPU to an Intel system. So I guess where AMD is really ahead is in pricing for a certain minimum level of performance.
Is the Intel quad core really faster than the AMD hex core in video work, dollar for dollar? I'm running Corel MovieFactory 7, and the six cores seem to do rather well...
Jarred, I think the point is that with an AMD Fusion chip, most users would get the benefit of some pretty solid GPU acceleration compared to a competing part, without the need for a dedicated GPU. In case you didn't notice, most people don't build their own systems. They use OEM boxes, and even adding an entry-level dGPU could easily add $100 to the system cost - assuming they don't force you to bundle a faster processor or upgrade to a fancier model to get a dGPU in the first place.
So for things that people use every day, like IE9/Chrome/FF4, Flash, etc, that all benefit from hardware acceleration, I think it's a little unfair to not factor that in properly. GPU acceleration isn't fading away, it's growing. If SYSmark says that a $100 Intel chip is 50% faster than a competing AMD product at the same price, when the AMD product in actual use is as good if not better, I can see AMD's problem.
As for Nvidia, their angle is a little different I think, but no less valid. If you test a system with Intel's integrated HD3000, and then test it with a "costly" (OEM) add-in Nvidia GPU, SYSmark may tell people that it's not significantly better. "Don't bother, it's not worth the extra money." That might not sit well with Nvidia, and I can't really blame them.
Oh, and that's just for desktops. How much to "add a $50 GPU" to an Intel laptop, you think there, partner?
If you want a laptop with dGPU, it's about $50 to $100 more than a laptop without (depending on dGPU). Considering you can already get Optimus with GT 525M level graphics and better CPU performance for around $700, well, I've already covered this plenty in the Llano review. Plus you're discounting how well Intel's HD 3000 handles things; sure, Llano is faster, but on average it's about 30-50% in games, and for things like viewing YouTube or surfing the web, it's really no faster (and sometimes slower). Outside of gaming, name me one area where an Intel Sandy Bridge system truly struggles. I've tested several, and I can help out: they don't struggle. With anything. Neither does Llano, but again outside of gaming Intel is still faster.
You beat me to it. But not just DX11 games. It also lacks CS 5.0. So for HD3000 you'd have to fall back to what, 4.1? Above and beyond that, performance using CS or OpenCL will not be as good as it is with Llano or a dGPU.
But according to SYSmark, that isn't relevant. That's their beef with BAPCo. I mean really, IE8? FF 3.6? Gee, I wonder why they chose pre-hardware-accelerated browsers... oh, hello there, Intel-logo briefcase stuffed full of cash!
Exactly my point. What do you mean by "overall system performance"? It's obvious that all modern processors can handle basic word processing and internet usage. But there are also lots of forum posts from sad 14 year-olds wondering why their brand new laptop can't run any games despite the fact that the salesmen said it was a really fast computer.
How can you tell if your single number is high because the processor is ridiculously fast but the GPU still can't play games? Or if the GPU is fast, but the processor doesn't have enough muscle for running simulations? And if both systems get the same overall score but are so clearly different in capabilities, what's the point of your single score for overall system performance?
To finish the thought: in a world where laptops have several scores next to them, little Johnny could pay attention to the one that has a high score for gaming and use that to make a semi-informed decision for his needs. Sure, the gaming tests themselves will be subject to debate, but less so than the "overall system" score.
In what situations are AMD CPUs faster than same generation Intel chips?
I can only think of 1 case-- 3d applications.
We can go on all day about performance/frequency ratios, and performance/watt ratios, but only one thing matters:
And I'm paraphrasing here: "virtually all current CPUs will be more than adequate for 90% of users (or usage scenarios-- whatever it was)."
Intel is flat out faster at that 90%.
And if you fall within the other 10%, and that 10% matters to you at all, you need to be looking at discrete solutions.
I could say this: AMD GOT ROMPED at the 90% that matters to most people.
Now, I could play devil's advocate and say that Intel is in fact muscling its way to better bench scores... but I could just as easily say that AMD, VIA, and Nvidia made this decision because they were lacking on that 90%.
It boils down to this: AMD is going to do what's in its best interests... and if that means crying wolf and forming a gang with the people who really care about iGPUs to boycott a benchmark, they're gonna do it.
They have a huge interest in seeing that integrated graphics scores matter in benchmarks... just like Intel has an interest in seeing that its strengths score well.
If you think this is about justice, you're sadly mistaken. It's pure business, plain and simple.
If you've been reading AnandTech for any length of time, you'll know that we place a lot more weight on real-world benchmarks than on synthetic tests.
Ha ha. I often think the same thing when I see the SSD reviews (in a good natured way). Anand and company have to work pretty hard to find tests that show a large spread in SSD scores. And they have to do things that most users would never do.
But the reviewers often mention that once you've gone SSD, the differences are not huge. Though maybe those disclaimers need to appear more often. And they might advise folks that at a given price point, it may be better to step up in capacity (from 90GB to 120GB) than to step up in SSD performance (from really-fast to really-really-fast). But AnandTech readers are hopefully smart enough to figure that out themselves (or by looking at the PCMark Vantage scores that show only small differences from the slowest to the fastest SSDs).
For anyone to use any BAPCo benchmark as an unbiased performance metric is a confession of pure bias, and it's been known for years. It all comes down to who yells the loudest, and in this case, just like many others, Intel has had its ground troops out trolling forums and tech sites to obscure the message. I read this blog as a precursor to a full AnandTech endorsement of BAPCo.
Here's a little nugget worth considering:
"AMD also points out that the president of BAPCo happens to be the head of performance benchmarking at Intel. "
The President of BAPCo is not the head of performance benchmarking at Intel. The President works for Dell. You, just like AMD don't have your facts straight.
Shervin Kheradpir is the President of BAPCo and also Director of performance benchmarking at Intel. And Shervin Kheradpir also helped Futuremark develop PCMark.
Did some random googling and I couldn't find anything that links someone called "Gary Lusk" with BAPCo, while articles that say Shervin Kheradpir being president of BAPCo were plenty - some were even BAPCo's own press releases.
Don't know who's right here, just saying what a google search came out with. It sounds like Shervin Kheradpir is the real deal but who knows, there might be a secret conspiracy.
For those interested, specific phrases searched were "BAPCo president Shervin Kheradpir" and "BAPCo president Gary Lusk". Only the first page of search results were examined.
At the end of 2010, the situation hadn't improved, and I doubt it currently has.
The only gripe I have with how AnandTech reviews are structured is that SYSmark is usually the first result someone sees, and also that the breakdown of those results is given the same weight as separate benchmarks in some reviews and in "Bench".
SM2007 may be first, but we also don't pay that much attention to it in the text, particularly of late. I know Anand has noted in the past that SM07 doesn't tend to scale with CPU performance or other factors very well, in part because the constituent applications are now about six years old (SM07 mostly uses apps from 2005 or so). I personally don't like it because it's a serious pain in the rear to run -- you have to be running a vanilla Windows Vista install (no SP1 or most of the patches, and no other applications). I can't imagine SM12 is going to be any better.
PCMark is a ton easier to run, and IMO the resulting performance is about as relevant... which is not to say it's particularly useful. PCMark Vantage and 7 are totally skewed by SSD performance, and even then the individual scores are still skewed by various items. I've started including the full set of PCM7 results for laptops, but Intel's Quick Sync is used in the Computation score and ends up being twice as fast as anything else. The remaining six tests all have storage elements so that using an SSD over an HDD can improve the scores by 50%. Ugh.
But then, tests like Cinebench and x264 encoding aren't necessarily great either. They're heavily threaded, but I'd say 99.9% of computer users will never use a full 3D rendering application like Lightwave or 3ds Max or whatever, and probably 99% will never do video encoding other than to convert their iPhone/smartphone clip into a YouTube upload. So how do we measure if a PC is actually useful for most people? Well, we can't really, other than to just give a subjective impression of the performance. In that case, as I note in the news post, anything faster than Llano is usually sufficient for most people.
In the end, AnandTech focuses on the enthusiast market where things like raw CPU, GPU, SSD, etc. speed matter to people. We assume that our readership is well enough informed to know that just because an Intel i7-980X places at the top of most CPU performance results in Bench, it's not a CPU we actually recommend for the vast majority of people. All things being equal, yes, I'd like a faster CPU, but my "old" Core i7 Bloomfield is plenty for me, and even my 3.2GHz Core 2 Quad Kentsfield that I use for my work PC is plenty fast. And my C2Q with an HD 5670 is still faster than Llano, though it uses more power I'm sure. Heh.
I realize one can build a case against any benchmark, but if it only runs on a very specific configuration nobody sane would use, I don't see the point of running it at all. There are so many things that could influence the relative standings: drivers, bugfixes, scheduling optimizations, library/compiler updates, ...
While you're certainly right with regards to Anandtech readership, sometimes it's hard to use the site as a reference when trying to inform someone making a buying decision. In the bench example, most would look at the cpu that wins the largest proportion of tests as the better cpu, regardless of what each individual test is labeled.
Anyhow, IMO the fact that this is happening in the run-up to Bulldozer is probably not a coincidence.
The article refers to Nigel Dessau as CFO. His title at AMD is Chief Marketing Officer. Thomas Seifert is the CFO (as well as currently holding the position of interim CEO).
Is it too much to ask to have benchmarks measure more abstract things like FLOPS, latencies, and IOPS? While SuperPi (and similar) don't really tell you how many instances of YouTube you can have running while compressing files and playing Crysis, it's a very clear-cut "this is what this means" benchmark. More of those, plz.
(also, how come no review ever uses the Windows Experience Index? those results may be somewhat vague, but it should be a pretty platform agnostic way of ranking things. and hey, no license needed!)
Most modern enthusiast processors score the maximum number of points. How are you going to differentiate between those?
As has been mentioned before, any midrange computer bought in the last 5 years is plenty fast for running everyday Windows (which is what that index measures, really).
Ugh... don't even get me started on the WEI! Let's see: a DX9 GPU is automatically limited to a score of 3.5 or less, if I'm not mistaken. DX10 is limited as well, but I think it might be to 5.9 or something. Basically, WEI considers old technology as always being inferior to new technology on some of the tests. The memory score doesn't tell you whether you're high bandwidth and high latency, low bandwidth and low latency, or somewhere in between -- and it benefits a lot more from tri-channel RAM than regular applications do (and I can say the same for WinRAR/WinZip type tests).
Mostly we don't run the theoretical stuff because it doesn't show how everything works together; FLOPS tests usually only measure a specific kind of FLOP, and they run almost entirely within L1 cache which isn't very realistic. Real applications are best, but as noted they're sometimes hard to qualitatively benchmark.
So AMD wants benchmarks to include updated revisions that take advantage of forward thinking hardware advancements.
Anyone recall NetBurst's first iteration? SSE engines to replace some of the legacy x87 floating-point operations. AMD = "Oh, look, our Athlons blow away P4s in x87 floating-point performance, but don't recompile for the new SSE instructions in the P4."
What a joke. This company is like a whiny spoiled little kid. "What about me!" It's like the Boston Red Sox vs New York Yankees rivalry. So sick of the whiny, "what about me" kid. Sorry Boston, you aren't the Yankees.
Very true. Well, Intel spreads FUD when things don't go their way. Latest example? Windows 8 support for ARM. The bullshit claims Intel was quick to spread were furiously disputed by Microsoft...
At the end of the day, they're all the same, just in different flavours.
Firefox 4 has been around for a while, so any NEW benchmark should use it, not Firefox 3.6. IE 9 is also the most recent version, so should be used, not IE 8. The fact that these are very common and popular browsers and both use GPU acceleration would have been a benefit to AMD, and if most people only care about web browsing and perhaps some word processing, it makes sense to use what most people are using.
Then you have Flash, and as much as some people may hate it, I expect that upwards of 90 percent of people on a Windows platform will have Flash installed and enabled. Flash is also GPU accelerated, meaning Intel would look REALLY bad using Firefox 4 with Flash content if that was a huge part of your benchmark. I bet Intel would complain if web browsing tests heavily weighted Firefox 4+Flash performance and it showed Intel was slower.
...but Nvidia and VIA as well, leaving only Intel in the industry group BAPCo, making it a group of one...
If this is true, you should update your article, since 3 of 4 companies dropping out does imply something's really fishy with SYSmark, while AMD dropping out against Intel only implies AMD is afraid of the results of the new version of the tests for their new CPU designs.
The most interesting part of this news is that Nvidia and VIA left too, so it's not just AMD going crazy. Anyway, this benchmark is pointless: it doesn't take into consideration just the CPU or all the hardware (from USB ports to screen and Wi-Fi), it's unclear what kind of usage patterns are factored in, and it gained no traction in the community. Not very sure why it is being used in CPU reviews when it's not supposed to reflect CPU perf anyway. If it was any good it could be used in pre-built system reviews, but it's not, so why bother. Since I mentioned Wi-Fi testing, maybe you guys could include that in notebook reviews, and maybe HTPC tests in all APU and GPU reviews (since for many their main PC is also their HTPC).
I agree. The whole-system benchmarks are much less relevant to my interests than the specific application tests. We do a lot of encoding using x264, so the comparative AnandTech 2nd pass tests are a goldmine for us.
These Borg collective benchmarks would be much more useful from a financial point of view if they stopped trying to collect all the results into one trademarked point value. They already conduct a number of tests using hardware traces of professional software that is financially unreasonable for an average consumer or hardware reviewer to own. If they make these individual tests center stage and give us results in seconds or data / operations per period of time then buying Borg Benchmark 2012 ("End of the World Edition!") for $100-150 is a steal compared to even a single student license of whatever Autodesk is punishing its customers with. I would rather read about individual program scores than see some giant meaningless consumer-friendly score.
Likewise, I value the reviewer's words when explaining the charts more than the charts themselves. If it's a page of Borg benchmark charts and a paragraph at the bottom, I skip to the bottom. If the reviewer remarks that a particular product scored well in a certain test, I go back up and look at the chart.
I think a better comparison than 'whiny baseball fans' (...?) is human intelligence. As computers continue to develop, there are more and more facets to their performance. Like human IQ, one single number like SYSmark doesn't really reflect what a computer is capable of doing.
AMD's CPUs can't touch Intel's in terms of sheer compute performance, period. Bulldozer might change that, but for now, it's a fact. However, as Jarred alluded, even the $60 AMD Athlon II X2 250 is sufficient for most desktop computer user's needs. Think of it like this: you're running a company and need employees. If Intel is the applicant with an IQ of 140 and AMD is the applicant with an IQ of 100, who do you hire? For most positions, you're going to hire the person who is 'good enough' - you can pay them less, and they'll get the job done.
Obviously computers aren't as nuanced as people, but AMD does in fact field some real strengths compared to Intel, especially in terms of pricing. For example, the applicant with an IQ of 140 might also have some glaring personality flaw, and the average applicant might be fun to talk with at lunch. But you don't see those important considerations in one number like IQ. SYSmark does not accurately represent AMD's dominance in graphics, which is an important aspect of the whole computing experience.
AMD is not 'whining' about SYSmark because it has less powerful chips. Its problem with SYSmark is that it's an increasingly meaningless single number that doesn't reflect how CPUs, and now APUs, have developed in the last few years. Hell most salespeople don't really understand benchmarks, and even many of AnandTech's forum users place bizarre emphasis on certain benchmarks. Of course AMD is not going to support a benchmark that they (rightly) believe doesn't paint an accurate picture for consumers. I build a lot of systems for the average user: internet browsing, email, office productivity, Facebook games, maybe some streaming HD content. An AMD Athlon II and an SSD cost about the same as the least expensive Core i3 and a mechanical hard disk. The AMD + SSD combo is a MUCH better experience for the average user because the system just feels snappy and immediately responsive, whereas the i3 system with its platter-based drive is sluggish by comparison.
IMHO, Intel continues to deftly leverage its strengths: manufacturing and compute power. AMD can never hope to beat or even rival Intel at its own game, and therefore chose a different path: address the weak links in the whole system. Right now, that's graphics.
Finally, if you honestly expect a benchmark to be unbiased when its President is the 'head of benchmarking' or whatever at Intel, I've a bridge you might be interested in buying. :P
If Intel is the only one left in BAPCo, and their chief benchmarking guy runs BAPCo, how can anyone say that SysMark can be trusted? That there isn't a huge conflict of interest that BAPCo is neck-deep in? No Via. No AMD. No nVidia. What's left? SysMark is now just a rubber-stamp.
SysMark just became a waste of tester time and website bandwidth. Jarred, Anand, et al., just don't bother with it. You've got a limited amount of time to run benchmarks, and you need to choose for credibility. SysMark no longer has that credibility. I'm sure you can come up with something that's much more vendor-agnostic.
Another case of whining from AMD because their CPUs do not measure up. There are plenty of other tests to measure GPU performance and AMD would be favored in those. Just put out a competitive CPU for goodness sake!!!
If you don't have any more credible source to point to, I wouldn't get carried away with lambasting Anandtech for not talking about anyone other than AMD. Perhaps NVIDIA and VIA will actually leave, but by my estimates I'd put Charlie over at SA batting at maybe .200. He also hates Intel as much as many of you people, so perhaps that's why you all like to read his site?
"If a salesperson comes to your company and mentions the suite, you know who they are pushing, it is that bad." What that really means is that most salespeople push Intel-based servers and PCs. Probably because they're faster in the tasks that 99% of businesses do regularly -- because we all know how often businesses want games to run better!
AnandTech does mention at the bottom that Nvidia has also left. The problem is, the article's title ONLY mentions AMD. An update would be in order, especially if it's confirmed that VIA has also backed out.
It is entirely conceivable that Bulldozer does not sufficiently address those areas which Intel appears to be dominating, however benchmarks really should be vendor agnostic.
I think AMD should write their own bench that takes full advantage of their APU, and Anandtech should put this chart at the top of every review. Intel can stop their whining and make a proper GPU for goodness sake!
I am not sure if you are being sarcastic toward AMD or toward me about my post. Anyway, at least for the desktop, I think Fusion so far is a failure. The GPU performance is not that great, and the CPU is clearly inferior to Sandy Bridge, and maybe to an earlier generation or two as well. Intel decided not to go into the GPU business, but they are very good at their primary business of making CPUs. How many generations ahead of AMD are they now? Two, maybe three, depending on when Bulldozer and Ivy Bridge come out and their relative performance.
So just get a superior CPU (Intel) and add a $50 graphics card. You will have performance superior to Llano in both areas. However, I do see a place for Llano in the notebook market (where it is not easy to upgrade the graphics) if the price is right.
And I am not an Intel fanboy. The first computer I had that was really suitable for gaming had an AMD processor, and I would like for them to put out a competitive product. So far I have not seen this on the desktop since the Athlon 64 days, yet they continue to hype their products, and somehow many people still think they are doing a great job.
Fusion so far has been a runaway hit, AMD is selling every piece of silicon made for them. BTW, how many generations behind is Intel on graphics? 5? 6? How about we just say Intel is hopelessly behind in graphics and leave it at that.
back on topic.. I don't think anyone can hold it against AMD, nvidia AND VIA to stop supporting Sysmark if they feel like it is no longer a valid benchmark.
Actually, it's Intel that's the odd one out. They are the ONLY company that still thinks this benchmark is up to date with reality.
And as for AnandTech, I've been reading this site for a long time, and in my personal opinion: stop making the first 4 pages of ANY review bullshit synthetics, please.
The news release came via AMD PR, and didn't mention anyone else leaving. I'd have to dig around for additional details, and I don't want to get too far out there. Still, it's rather funny in a way to have an "industry group" with only one participant. :)
So let me get this right. You got some information emailed to you and you post it without checking the story because you don't want to do the job you are paid for?
How is this reassuring to the readers here? Is this the case for all the stuff that gets emailed to Anandtech?
There's no "checking the story" necessary, this is an official press release from AMD. Both NVIDIA and VIA have not issued press releases as of yet, any other information is conjecture.
What exactly am I supposed to do? AMD sent me a news release saying they are leaving BAPCo and I wrote an article about that. The fact that NVIDIA is apparently leaving as well (with absolutely no reason given) is nice to know, but that's not necessary for this piece of news. Checking the story? Um... which part of "AMD PR sent it to me" do you not get? AMD sent it, and it's about AMD, so I'm pretty sure I have the story I need. It took NVIDIA several hours to get back to us with, "Yes, we are leaving." VIA, we still haven't heard from. I'd better pull this story for six hours while we wait for more details, because other people leaving changes... nothing.
To start: as a review site (not micro-blogging or news [such as DailyTech]), I would never publish a press release verbatim or write an 'article' with one as the sole source of info. Otherwise you end up with this: http://semiaccurate.com/2011/06/16/intel-declares-...
by which time most sentient life has already stopped reading your site (which wouldn't matter) and you lose page views and ad dollars (which does matter).
Now that I've answered your question, I'll take the time to refute your assertion. You did not write an article about 'AMD leaving BAPCo'. This information showed up in your article, but what you wrote was an article about benchmarking software; witness your opening statement: "What’s in a Benchmark? This is a pertinent question that all users need to ask themselves, because if you don’t know what a benchmark actually tests and how that relates to the real world, the scores are meaningless."
You go on to discuss the limitations of benchmarking software and do some editorializing on different chips and platforms (Atom vs Brazos, Llano vs SB and Discrete etc) and then attempt to justify your continued use of the inappropriate tools. All of this is fine.
Scattered throughout, though, you leave journalistic integrity (and facts) behind and begin to make assumptions about an event that you've obviously not investigated or understood. "Reading through AMD’s announcement and Nigel’s blog, it’s pretty clear what AMD is after: they want the GPU to play a more prominent role in measurements of overall system performance."
That quote is what is called a lie through omission. You picked one of four key things to focus on (heterogeneous computing) and failed to even mention the others. The real list is: failure to have an open benchmark (to review what's being tested), failure to use representative workloads (heterogeneous computing), bias toward Intel designs, and generation of misleading results.
That you picked the least important of the listed reasons is telling (neither option is kind so I won't call them out explicitly), as while AMD would love to have a major benchmarking suite focus on what they do well to the exclusion of their competition, this hurts them much less than the other three. Failure to be open: if the result says CPU X is 20% faster than CPU Y but doesn't tell you what it's benching, then it's meaningless. If CPU X is 100% faster than CPU Y at task N, and task N gets 65% of the weighting, but task N has no real-world relevance to customer P, then CPU X isn't better than CPU Y for customer P. This is very important. Bias toward Intel designs: this shouldn't need additional explanation. Generation of misleading results: this ties into the above two, but has to do with the overall packaging of the benchmarks more than anything else, so it gets its own category.
At the beginning of the article you also mention that Intel will be faster than AMD in SYSmark, and then spend a good chunk of the article defending the future use of SYSmark. Despite all the other text, this is editorial bias, and it doesn't happen with good/careful authors. It leads people to think that Intel makes a better CPU regardless of surrounding details, and suggests that AMD is complaining only because they are worse.
Here's where we get to 'it's important to do your research before publishing a story'. The fact that AMD and NVIDIA and VIA dropped out of BAPCo is of critical importance. NVIDIA is a graphics manufacturer in this space (no CPU interest) and VIA has no graphics interest. This stops being a case of a poorly performing product being massaged by PR and becomes a legitimate concern about SYSmark. Two of the three parties with an interest in x86 compute left over concerns about the validity of the product, and two of the three parties with an interest in x86 graphics left over concerns about the validity of the product. It has to be a cold day in hell for NVIDIA to line up and say AMD is right. Others have been very critical of BAPCo too, such as the guys at OpenSourceMark, who have documented examples of SYSmark heavily biasing results to fit Intel's designs.
For why any of this matters beyond personal integrity (which I freely admit doesn't mean anything on the internet; you pay for the server so you get to say what you want), we have to look past Intel's behaviour (which I think is fine; they are in it to sell chips, so by all means make 'tools' that make your product look good) and at what Anandtech does.
You benchmark and review hardware. If a key tool that allows you to carry out your job and run your business comes under question, you normally do everything in your power to check the veracity of the tool. If someone in the pharmaceutical industry tells you the cholesterol test your doctor just gave you is really only designed to sell more medication and the results are biased, you'd expect your doctor to have checked into this for you. While you are thankfully not responsible for anyone's life, people still trust you with their purchasing decisions based on the work done at Anandtech. Given how much time the site spends parroting about how unbiased and fair you are, how you try to use tests that give meaningful results and aren't swayed by PR, these issues are serious for you.
So what are you supposed to do? Do your job. That means a little research, maybe running some tests of your own. SYSmark says Intel CPU X outperforms AMD CPU Y at Excel by N%? You can test that. It turns out that you do this for a living, and have access to the gear to try CPU X vs CPU Y. Run a Monte Carlo simulation, sort a very large data set, run some macros. These are easy things to do while confirming facts about a story. That way, when you sit down and write an article, your readers get the story, the facts, and reliable conclusions.
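For instance, the "sort a very large data set" and Monte Carlo checks mentioned above take only minutes to throw together. Here's a minimal Python sketch; the array sizes and the pi-estimation workload are arbitrary choices of mine, not anything SYSmark actually runs:

```python
import random
import time

def time_it(label, fn):
    """Time one run of fn and report wall-clock seconds."""
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.3f}s")
    return result, elapsed

# Sort a large data set -- roughly the kind of operation an
# "Excel sort N rows" style benchmark exercises.
data = [random.random() for _ in range(2_000_000)]
time_it("sort 2M floats", lambda: sorted(data))

# A toy Monte Carlo simulation (estimating pi) -- another easy
# CPU-bound workload you can run on two machines and compare.
def monte_carlo_pi(samples=1_000_000):
    inside = sum(1 for _ in range(samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / samples

time_it("1M-sample Monte Carlo", monte_carlo_pi)
```

Run the same script on CPU X and CPU Y and you have a crude but independent sanity check on whatever percentage gap a canned benchmark claims.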
If you want to write for Engadget (which is what you presented in this article), go write for Engadget. If you want to be a PR rep for a company, then go work in PR. If you want to write hardware reviews, then you need to actually stick to the tenets of your job all the time. You don't publish press release performance predictions, so why publish press release benchmark predictions?
Daniel
PS: VIA has also publicly confirmed it has left BAPco.
Amazingly enough, we are a technical site that often runs news stories, and the fact that AMD left BAPCo is pretty big news. Yes, even AnandTech has tech stories similar to what you might find on Engadget or DailyTech. You might look here, for instance: http://www.anandtech.com/news/
I don't generally try to read what everyone else writes about a particular piece of news and then echo the thoughts of the market; I think for myself and provide my own technical analysis. So when AMD says that they don't like the latest SYSmark, immediately the first thing that comes to mind is, "Gee, I bet it doesn't favor their APUs as much as they would like." Whether that's good or bad is a different story, but to pretend that AMD isn't politicking is ludicrous.
My assertion is that you need to know what every benchmark does in order to determine whether the results are meaningful or not. I don't care if it's SYSmark, PCMark, Cinebench, 3DMark, SunSpider, or whatever. That AMD is leaving because they disagree with how SYSmark 2012 works is fine by me. I disagree with lots of benchmarks as being meaningful (Sandra and SuperPi immediately come to mind). Running benchmarks for a news story, especially when I may not have appropriate hardware on hand, doesn't work.
So now VIA and NVIDIA confirm they have left, but no one is really saying why other than VIA apparently saying they don't feel the workload SM12 measures represents a modern user or whatever. I still won't run SM12 on laptops, just like I didn't run SM07, because it's a royal pain to do so. You need to do a clean install (no service packs or other patches), and even then it doesn't always work. Anand can do it on desktops because he doesn't have to change them up every single review. Even so, including SM12 doesn't make an article any worse, unless the article were to then conclude that because SM12 favors CPU x, you should buy CPU x.
If the results from SM12 correlate with what we see in other CPU-centric tests, that's fine by me. As long as you understand it's a CPU/general performance metric and makes no demands of the GPU, you know what you're testing and what the results mean. Testing Cinebench and then complaining that it doesn't measure SSD performance would make as much sense to me. When I think of system performance (which is what SYSmark purports to measure), mostly I'm looking at CPU, storage, and perhaps a bit of GPU.
As I mention elsewhere, most of the people I know (i.e. not computer enthusiasts or gamers) still have no need of a good GPU. GPGPU isn't even remotely mainstream, video works fine on SNB (good enough for everyone besides HTPC purists), and the only thing Llano really does substantially better than Intel is running games on the integrated graphics. Frankly, Llano really isn't a good GPU; it's just a good integrated GPU that's only as fast as a $35 discrete GPU. If the drivers get worked out, Llano could make for a good HTPC setup as well, but right now that's not happening either. So, Llano is good for laptops, but on desktops it just doesn't mean a lot unless you absolutely refuse to buy a dGPU.
Sounds like the lack of GPU aspects in SYSmark 2012 isn't actually the problem if that article is true (and let's be honest: it has plenty of parts that ring true). The problem with SM12 is that Bulldozer is going to suck, and SYSmark just points this out.
So pull your head out of AMD's ass. This announcement comes from AMD's MARKETING DEPARTMENT. Do you need anything more to prove that this has nothing to do with engineering and architectural superiority? Marketing is trying to get people LIKE YOU to ignore any benchmark that shows how badly AMD's CPUs are falling behind. And it's working. Enjoy your new Bulldozer system, to replace your amazing Phenom II system. Me, I'm going to continue running my Core i7 for a while longer, and probably upgrade to Ivy Bridge or its successor, because Bloomfield is already going to beat Bulldozer, never mind Sandy Bridge and Ivy Bridge.
That article is pure garbage; the tone of the writing and the "anonymous source" should make it pretty obvious that what you're looking at is somewhere between propaganda-grade and tabloid-grade. Besides, compare Intel's SYSmark superiority to any other real-world benchmark: the SYSmark score is always disproportionate to real life, and always in Intel's favor. To say AMD doesn't have a legitimate reason for leaving is absurd, and then you blame the victim by accusing AMD of trying to do exactly what Intel IS doing.
Donnie hit the nail on the head, as evidenced by Jarred's back-pedalling reply being longer than the actual article. Besides, the lesson we've learned from this is that AMD's "old" K10.5.2 architecture in Llano isn't that far behind SB, unless you build machines to run Sysmark. If Bulldozer makes any improvement at all, then it'll be just that much more competitive.
I think that ALL benchmarking is subject to the eye of the beholder.
For example, in my current work, we run finite element models using Nastran and LS-DYNA. I also used to run computational fluid dynamics codes such as Fluent and CFX.
But most CPU benchmarks are rarely as intensive as those programs, and even LINPACK -- depending on how you write/compile/run it -- will show different results.
It is for this very reason that those software vendors have developed "standard" benchmark cases that test a variety of things, including hardware and software improvements/features.
The downside is that a lot of those programs are a) expensive and b) the benchmarks themselves are time-consuming (~25,000 seconds for a 3-car crash model, or about 7 hours for one run). So if you're testing a bunch of new processors, and you're testing the scalability of the additional cores (for example), you can easily spend between two weeks and a month just with that one program, running that ONE test.
Another example is in the realm of graphics processing. While most people test with computer games, again, because of what I do, we use Altair HyperMesh/HyperView. When you're looking at a model with 2.2 million nodes, that's a very heavy load for a GPU to handle. And I can almost assure you that even the best, top-of-the-line current-generation consumer cards won't be able to take that kind of load in stride while getting insane framerates. And while the point about those cards not being designed for such a workload is a valid one, why should you pick a benchmark that caters only to what the card does well?
That's like measuring 0-60 performance against a drag racer, and then complaining that it isn't designed to turn, so you're not going to put it on the Nürburgring.
If you look back at the initial release of Windows Vista (as much as some people hated and still hate it), it did bring 3D to the desktop with the Aero theme. This is where having a better GPU really stands out, and still does. Move a window around, and visually, a better GPU with Aero does improve the experience.
Yes, it's minor, but the fact that we have GPU power improving the desktop DOES mean that GPU power should not be discounted. There are other areas where GPU power comes into play, and the overall feel of how good a system is to use SHOULD be part of what benchmarking is about. As the article stated, Firefox 4 (and the 5 beta) plus IE9 and other applications also use GPU power to improve performance. If you want to do anything involving graphics, the GPU can really come into play, so why not make it as important as how quickly a spreadsheet calculation can be run?
I don't want to come off as a total tool with my first comment here, but the title is probably the most misleading one I have ever read.
"AMD resigns from BAPCo, NVIDIA joins them" makes it seem like AMD left and NVIDIA joined BAPCo doesn't it?
"AMD resigns from BAPCo, NVIDIA leaves too" would be 400 times more accurate and impossible to misinterpret.
On the topic at hand however, did anyone trust SysMark before? Since 2002 Intel has been making the test continuously more biased towards their processors. Everyone should know this.
Indeed, buy an intel cpu and do spreadsheets all day long. As long as you don't use the graphically intense internet, value quality visual entertainment or game casually or seriously, you'll be fine with HD 3000 integrated.
That's a joke, right? The low-cost Intel-based machines (under $550 USD) may have faster processors in them, but the rest of the components in the system are cheap and have many problems. To get many low-cost Intel-based machines stable, you end up doing things like turning off the power conservation stuff and keeping the machine from going to sleep (since in many cases waking up from standby causes system instability), among other things.
AMD based machines may have a slower processor, but the supporting components in the low-cost machines really are better quality overall.
I suppose the "video" part is debatable when comparing Llano with SNB, but Brazos is clearly superior to Atom in that area. I'll defer to Ganesh for the video stuff; I believe he is working on an article showing how Llano fares in various HTPC scenarios.
2D: SB is a bit slow and buggy.
3D: SB is slow, CHOPPY, and BUGGY (well, compared to previous Intel efforts, it has really good drivers...).
2D video: SB is OK, but the image quality is in line with GeForce before the FX series.
Amateur video encoding: SB wins easily thanks to QuickSync. Not that this has anything to do with graphics...
Calling Anandtech's assessment of SB vs. Llano graphics unfair to Intel is crazy BTW. AT is one of the biggest fans of GPU awesomeness SB brought.
So when AT is forced to admit SB IGP is not better than Llano IGP, it is really something ...
I don't see any comments about software versions. The Anandtech article mentions old versions of IE, etc. As a low-level employee of a local government, we use older versions of software - XP Pro, older IE etc. Office 2010 is soon to be rolled out. And the BAPCo response does mention Sysmark is for organizations.
So maybe it is more useful for governments and large, slow-to-upgrade organizations, and less useful for on-the-ball individuals and small businesses. Plus, remember the adage "no one ever got fired for buying IBM". No one will get fired for buying Intel either.
I think AMD Nvidia et al did the right thing; consumers have to be informed and decide on their total needs.
What I meant by that statement is that AMD's attitude seems to be to talk up their products, then be late to market with a product that does not measure up to the hype.
And when they get criticized, they say they are still better, you are just using the wrong test. You hurt my feelings (made my product look bad), so I am just going to ignore you.
No, AMD has the attitude that the entire system is what people use, not just a single component at any one time. Now, if you saw a gaming benchmark that tested both CPU and GPU performance, but then discounted the CPU results, wouldn't that leave Intel looking like the platform to ignore?
The problem isn't that Sysmark ends up showing Intel being ahead, but that in any benchmark that shows AMD doing well, it ends up not really affecting the OVERALL score.
When it comes to web-based benchmarks, it stands to reason that Firefox 4 (which came out back in March) would be used, since it has been out for three months, and IE9 would be used as well. Both of these browsers have been out long enough that a NEW benchmark labeled 2012 should use them.
I agree with other posters when they say that tests like doing an Excel calculation on 32,000 rows should be seen as not being terribly important, so why let THAT test result have a higher value in the overall result than some other tests?
I run sysmark'07 all the time and the GPU plays a HUGE role in the score. Out of 4 categories, 2 are GPU bound. You can add ram and a faster CPU and the scores hardly increase, but if you change the screen resolution or update a video driver the scores dramatically change. I would have thought AMD would like the fact that sysmark is so GPU bound...
If 12 is anything like 07, you'd think GPU oriented companies would be drooling over it.
I was really expecting more from you, Jarred, but it seems to me that you're also on Intel's payroll. How do you dare compare Llano's powerful GPU to Intel's crappy graphics in any way?
And as much as you (and Anand) hate it, GPGPU is becoming part of modern-day apps. CPU performance still matters, but GPU performance matters more (if you really did your homework, you'd know what I'm talking about).
Name one major task that every day people actually do that needs GPGPU. Yes, you can do some amazing stuff with such chips, but for my mom who only checks her email and forwards spam my way, what does she miss out on? Llano's GPU isn't that great; it's merely better than Intel's IGP. Llano is basically the equivalent of a $35 GPU with a $75 processor, but with better power characteristics. We provided a complete review that showed where Llano does well, but while the Llano GPU is three times faster than the previous HD 4250, it's only about 30-50% faster than HD 3000.
Need some more food for thought? GPGPU performance running on an HD 5670 is more than double what Llano can provide. Anyone that cares about GPGPU is still going to need a discrete GPU. But you have to go and accuse us of "hating" GPGPU. Um, what? Where did I say I hate GPGPU? FYI, I'm running about 2200Mhash/s on ten varieties of AMD GPUs. (Yes, AMD, whom I "hate" -- because for this particular task they're about 3x to 5x faster than NVIDIA, and my overclocked i7-965 would be pulling maybe 21-22Mhash/s, or about twice as fast as an AMD E-350 that uses 1/10 the power.) You wouldn't know that, of course, because it's irrelevant; it would only be relevant for people that want to build a farm of PCs to do GPGPU computing.
Your talk of bias is unwarranted; there's nothing in this news post that's even remotely biased. I point out that AMD has issues with SYSmark, and I point out what those issues are, and I even show how AMD has plenty of facts to back them up. And you accuse me of bias against AMD for... what? Pointing out that they're right? That SYSmark 2012 is just a benchmark that only shows a few facets of performance, and if those facets are chosen "correctly" the result can be manipulated to put one part above another? I guess it's also bias when people point out that 3DMark runs poorly on anything without a discrete GPU, and that 3DMark11 requires DX11 support.
If SYSmark 2012 is a bad test because it puts more of an emphasis on the CPU and benefits Intel, then a gaming test that puts more of an emphasis on the GPU is equally bad. We use both types of tests for a reason, because both CPUs and GPUs (and SSDs) make a big difference in what a PC can do well and where it struggles. Yes, that's totally biased and I should be ashamed for saying such things!
Jarred, you guys tested a DX11 APU in unsupported DX9 mode for your review of Llano, without even mentioning that the SB competition does not support DX11 and that Llano was designed for the new API. I call that bias. I assume that when Intel finally has a DX11 product, the test bench will be updated?
I do wish people would keep comments about bias to themselves. No offence intended, but it only serves to make the complainants look silly and take more time to read through the comments.
The fact that SB doesn't support DX11 has been said a million times before, but for completion's sake I suppose it should be mentioned.
With a GPU like Llano's, you're not likely to be gaming at high resolutions or high detail levels with DX11. If, and when, Intel do support DX11, their past track record suggests it'll be of no use for their products either due to the lack of performance, so the comparison isn't likely to be worthwhile, and thus the point is moot. I'd really like to see DX10 and 11 performance figures for Llano but even with the fastest RAM available, is it really going to toss us a pleasant surprise? Far better than SB's HD 3000, sure, but not quite at the same level as a decent mainstream card.
I'm glad you took the time to read the review. Bumping up the API would probably bump down SB's already not-so-stellar performance, and that would be bad. Much less testing in DX11 mode, where AnandTech would have to explain to Intel why a review was published with only zeroes in the Intel column where numbers should be, in a case where 'lower is NOT better'.
On the topic of the review itself: it should provide ALL relevant information to a potential customer so that they can make the most informed choice upon purchase. Failing to mention that a potentially sub-$700 APU-based laptop would provide a more future-proof media and gaming path than an SB-based laptop, and be much cheaper than SB plus discrete graphics, is a serious gap.
Such an omission from such an experienced review site seems motivated by external factors.
In terms of the desktop Llano, Anandtech haven't actually reviewed it yet. In the final review, you can bet we'll see much faster RAM and how Llano can benefit from it. We should also see better and more benchmarks, though I would hope they don't include SysMark... for obvious reasons. :P
BAPCo and SysMark have been a joke for the past six years. No reviewer with a clue even bothers with it, and no sysadmin gives it any weight when making a decision.
The problem isn't just that it completely ignores the GPU (which is a very important part of a desktop PC and an even more important part of devices like HTPCs, tablets, etc.), it's that all its benchmarks are based on completely unrealistic operations, carefully picked to make sure Intel wins by the biggest margin possible. For example, their Excel benchmark is based on the time it takes to load the application followed by an insane amount of sorting operations (which turned out to be the operation where Intel CPUs had the biggest advantage).
The real mystery to me is why AMD and NVidia still had any sort of connection to BAPCo.
I do not really know why you, or anyone, uses SYSmark or any of these garbage synthetic benchmarks anyway. It's ALWAYS about the furthest thing from real-world performance. You may as well pick some arbitrary numbers and then go use Sandra to claim you've got the biggest eManhood on the planet. HDTune, HD Tach, etc. make more sense, but even these are NOT a good indicator of real-world performance either. So again...
In the end, the results are obscure numbers that only tend to obfuscate the real outcome. So in other words, someone is making themselves important (and rich) by providing totally useless system information to others.
Gaming benchmarks make sense because gaming can be an important factor for someone buying equipment. Video encoding/transcoding is important because, again, different hardware can produce vastly different performance levels. Even Photoshop benchmarks are important to many. But who here runs an app called SYSmark other than for benchmarking? No one...
Exactly; SYSmark measures something nobody needs. Gaming and video encoding are what matter today, and they can be measured in real tests.
If Anand chooses to use SYSmark and whatever else like it, it will be bad for consumers, and a small step toward keeping Intel from making the most useful products even though they have the best technology and development resources in the world. An idiotic outcome for the consumers.
Use tools like the HD benchmark suite you developed. It was a bit heavy on the 4K random write -- wonder why -- and lol later on post Intel G2 for that, but at least it reflects real usage, and is a valuable tool for consumers selecting an SSD.
Gaming and video encoding? Really? Gaming is something that many read Anandtech and other sites to check performance for, but video encoding is something that really does not take up all that much computer time for MOST people. Web browsing benchmarks should probably be up near the top when it comes to overall importance, with watching video content(from different sources) being way up there as well.
I can see "time to open and extract files from an archive" being up there as well, since if you download drivers, they DO take a fair amount of time to extract. Creation of archives should be taken into account too, but not as much as extracting content. For much of this, of course, picking good hard drives and systems with a good SATA controller comes into play... except AMD has SATA 3 in its latest chipsets and Intel does not at this point. I could see Intel taking second place in file copy operations as a result, if you compare the latest AMD chipsets to Intel chipsets.
Honestly speaking, IMO AnandTech is NOT showing any bias against AMD... anyone thinking that did not read the article properly and/or is nitpicking. After reading the entire article, I have to agree that AMD has done the right thing. BAPCo in its current state is just useless and skewed toward Intel.
And before someone accuses me of being an AMD fanboy, don't bother. Of the four computers I own, only one is AMD-based. That's simply because, for the roles they play in my life and the pricing in my neck of the woods, the Intel systems offer me more bang for the buck, period. I honestly believe that Intel is or was guilty of anti-competitive behaviour and underhanded, dishonest business practices in the OEM/corporate space and should have paid a bigger price. It's not like market share can be regained overnight, although currently Intel deserves to be dominant in the corporate space due to their obviously superior current CPUs.
In fact, I am already planning on upgrading my gaming rig to the Sandy Bridge platform (although Ivy Bridge has to be considered), and I will be buying whatever graphics card(s) represent the best bang for the buck at the time of purchase, whether red or green, since bang for the buck is the most important consideration for me, seeing that I have three computers constantly in use in my household (excluding laptops). Of course, most people in my situation will opt for medium-level hardware, i.e. just a little more than is required to be adequate for the next 3-4 years.
That said, benchmarks from sites such as AnandTech are an important tool for me when speccing a new PC, BUT they are ONE of many considerations in that process. Only a fool would base their entire spec on a specific benchmark.
The point that many people seem to miss is that, other than showing the top-performing hardware, it also allows us to see other hardware's relative performance, which is what I need when speccing most PCs -- i.e. how close in performance an i5-2300 is to an i7-2600 or a Phenom II 955 BE in a specific task like encoding, for instance. That is what I use to spec a system suitable to the main tasks it will perform.
People ask which computer to buy and I respond with tell me what you plan to use it for? Both AMD and Intel are right. On AMD's point people really do need more graphics power. And on Intel's point a little dedicated logic (i.e. quicksync) goes a long way.
While difficult to impossible, benchmarks should somehow be normalized to performance per watt. Clearly not all cases benefit from that, because sometimes raw performance really does matter. Or maybe performance per dollar is a better indicator.
To give an example, I game on a system with a lowly Pentium Dual-Core at 1.8GHz with 1MB of cache and an NVIDIA 8800GT 512 at 1920x1200 with all the eye candy (DX9) turned up. Sure, it could benefit from a faster CPU, but my system sustains over 24fps... to me, if movies are good enough at that speed, then so are games. And as was pointed out in the article, most of us need an SSD. I've certainly noticed that when my system is lagging, the hard drive light is solid on, not my measly CPU above 80% or my 2GB of system memory all used up.
Maybe for AMD and Intel the benchmarks should gauge performance per profit or cost to manufacture, because that would be more relevant to them.
Ultimately, I want to know raw performance and performance per dollar for the applications that I run.
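Normalizing a score to per-watt or per-dollar terms is just division over the same measurement. A trivial sketch with made-up numbers (hypothetical scores, TDPs, and prices invented purely for illustration, not real benchmark results):

```python
# Hypothetical scores, TDPs, and street prices -- invented numbers
# purely to illustrate the normalization, not real measurements.
systems = {
    "CPU A": {"score": 180.0, "watts": 95.0, "dollars": 220.0},
    "CPU B": {"score": 140.0, "watts": 65.0, "dollars": 120.0},
}

def normalize(s):
    """Return (points per watt, points per dollar) for one system."""
    return s["score"] / s["watts"], s["score"] / s["dollars"]

for name, s in systems.items():
    per_watt, per_dollar = normalize(s)
    print(f"{name}: {per_watt:.2f} pts/W, {per_dollar:.3f} pts/$")
```

Note how the ranking can flip: with these made-up numbers, CPU A "wins" on raw score, while CPU B wins both per watt and per dollar.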
Well, part of the problem is how the benchmarks are performed.
For example, 3DMark, SYSmark, etc. are all pre-packaged benchmarks that run in more or less the same way.
However, as I've also mentioned, how those are compiled can have a HUGE impact on performance.
For instance, if you take the LINPACK CPU floating point benchmark and compile it 100 different ways, you're going to get 100 different results. So that raises the question: "what's the actual performance of my system?" And then add to that the complexity of how each CPU actually executes what it is being asked to do. I'm sure that if you profile the LINPACK application while it is running, how it runs will also affect performance.
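For the curious, the kernel a LINPACK-style benchmark times is just "solve Ax = b". Here's a naive Python sketch of that operation (the real benchmark is Fortran/C, and this is exactly the kind of code whose timing swings wildly with language, compiler, and flags; the 150x150 size is an arbitrary choice for illustration):

```python
import random
import time

def solve(a, b):
    """Solve Ax = b with naive Gaussian elimination and partial
    pivoting -- the core operation a LINPACK-style benchmark times."""
    n = len(b)
    a = [row[:] for row in a]  # work on copies
    b = b[:]
    for k in range(n):
        # Partial pivoting: bring the largest remaining pivot up.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back-substitution
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

# Time a modest 150x150 solve (real LINPACK runs use far larger matrices).
n = 150
a = [[random.random() for _ in range(n)] for _ in range(n)]
b = [random.random() for _ in range(n)]
start = time.perf_counter()
solve(a, b)
print(f"{n}x{n} solve: {time.perf_counter() - start:.3f}s")
```

Swap the inner loops around, vectorize, or hand it to an optimized BLAS, and the same mathematical problem can easily run orders of magnitude faster, which is the whole point about compilation mattering.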
And that's just a very simple example.
Now take some of those aforementioned benchmarks, and the differences can compound until the results are diametrically opposed.
And then with the programs that I've mentioned that I use to benchmark my systems (and to some extent, AnandTech has run the Fluent benchmark), not knowing how those programs work (because they're not canned benchmarks) can affect the results too.
LS-DYNA's developers have found through testing and research that if you specify a pfile for the MPP decomposition, it can reduce your runtime by 33%! That's nothing to sneeze at.
"...you can do that on anything from a lowly AMD Brazos netbook to a hex-core monster system. Yes, we did leave out Atom, because there are certain areas where it falls short..."
BSN reports that discussions with an assortment of AMD insiders indicate that AMD had to kill SYSmark 2012 because it accurately reports that Bulldozer CPU performance is remarkably weak. Apparently, Bulldozer performance was weak enough to kill sales to a large portion of the government market. The world requires a semi-credible explanation of why SYSmark's death was required, and APU/GPU issues provide cover. When the full SM12 whitepaper is released, it will be possible to speculate whether some alternate weighting has any material impact on the Intel vs. AMD ratings. Bulldozer scores on SYSmark 2012 may speak volumes about the credibility of these reports.
Would be cool to see AnandTech update this article, but I presume the lack of actual Bulldozer scores means they won't. We'll see what BD has to offer in... what is it now, September? Maybe they can push it back to November for Black Friday? But if SYSmark 2012 shows poor performance on Bulldozer and Llano, I'm willing to go out on a limb and state that we'll see poor performance from Bulldozer and Llano in PCMark, Cinebench, SunSpider, Photoshop, etc. And you know what will happen then?
All the stupid AMD fanboys will come out and say those benchmarks don't matter either. They'll bitch and moan that AnandTech has sold out to Intel for pointing out that Intel is still faster. And on Bulldozer, Intel is REALLY going to be faster, because AMD won't have the weak "we have a real GPU inside our chips" excuse of Llano. When Bulldozer requires a discrete GPU just like Intel (actually, Intel doesn't, but for gaming it does), then all AMD has to fall back on is their CPU performance. I'm sure we'll see promotion of some benchmark that is heavily integer dependent and makes optimal use of the two INT cores per BD module, but integer performance is really starting to become secondary. The same people who trumpet GPGPU will probably try to claim that the BD design makes sense, but the BD design is the exact opposite of GPGPU: more simple cores that only do integer ops, with one FP unit per BD module!
Mark my words: Bulldozer is going to be as big of a disappointment as the original Phenom, the Phenom II, and now Llano. Llano is only good for laptops; on a desktop you already have far better CPUs (even from AMD), and if Llano's IGP is enough to keep you happy then you don't actually use a GPU much. Here's to hoping AMD's new CEO actually has a clue about how to compete. (Hint: trying to discredit benchmarks that you originally promoted simply because they no longer work in your favor is not competing.)
Can't say I've ever cared about any of the synthetic benchmarks.
When I look at upgrading components I like to compare CPU/GPU FPS scores for real games exclusively, since everything else is at best tertiary to me as far as home computer use goes.
Synthetic benchmarks are meaningless to me, and they're one part I don't pay attention to, because I can't interpret one abstract overall number in areas where I'm specifically looking for efficiency. I read for the less abstract numbers in games, video and content creation, 3D rendering, productivity, etc., and I am almost sure this is what most people come here repeatedly to read.
Until I read how a particular processor does on a specific task like 3D rendering, I am not satisfied. From time to time I also download Cinebench, for example, and do my own testing.
To be honest, if you decide not to include that information I will miss very little.
When I first read that even Nvidia was leaving, the "alarm bells" started ringing. Now with VIA tossing in the towel, it is very clear something is very wrong here. The remaining company either holds the answer to the problem or is causing it.
Well, any group of companies that does things in secret is certainly suspect. Secrecy allows for legal exploitation and is normally done in a covert way. What's new in the name of profit?
Transparency is the key to success, but some monopolies just have bad habits they can't shake.
TrackSmart - Tuesday, June 21, 2011 - link
Benchmarks clearly have their place, but the danger is always in the interpretation. Some folks will always look for that simple, single number to define "overall system performance", but how can you ever fairly create such a metric? Maybe it's better to end the single-metric game and just list a few simple subscores that better represent several common usage scenarios. I don't know what the best categories would be, but they might include Gaming, Video and Content Creation, Office and Productivity, and Statistics and Computation.
And then you'd have to resist the urge to somehow weight these individual categories into a somewhat meaningless single metric...
mgl888 - Tuesday, June 21, 2011 - link
I absolutely agree with the idea of using several subscores.
I've long stopped looking at "overall" PCMarks and SYSmarks, because how your computer performs no doubt depends on what you're doing.
However, I am more on AMD's side regarding this dispute. The significance of the GPU has certainly increased in the past decade for the general consumer and Sysmark should adjust their weights accordingly.
BSMonitor - Tuesday, June 21, 2011 - link
"However, I am more on AMD's side regarding this dispute. The significance of the GPU has certainly increased in the past decade for the general consumer and Sysmark should adjust their weights accordingly."
But the software and compilers aren't there yet. Now that AMD has more forward-thinking hardware, they want the benchmarking companies to jump on board right away with their design?? Or they'll take their ball and go home.
What if Intel had dropped out back in the P4 days because software/benchmarks weren't taking full advantage of SSE? Eventually they did, and the P4 dominated the Athlon XP by the end.
Myrandex - Tuesday, June 21, 2011 - link
At the end the P4 was fighting against the Athlon 64. There was a period when the P4 was better than the Athlon XP chips, but I don't think it was that drastic. Then the Athlon 64 came out and suddenly the tables were turned, all the way until Core 2.
Belard - Wednesday, June 22, 2011 - link
Sorry, but in general the P4 was a POS CPU from the moment it first farted out of Intel's fabs. Even the P3 had higher IPC.
When the first P4s came out (Socket 423, 1.4~1.8GHz), they were battling it out with the PIII and AMD Athlon, not the AMD64 or AMD-XP class CPUs.
So clock for clock, the 1.4GHz Athlons were up there with or better than the P4s (same with the PIII, up to a point), even though the P4s had double the memory bandwidth.
Then the AMD-XP CPUs were a good series against the first Socket 478 P4s.
Then came the AMD64s and the trend continued... 2.0GHz AMD64s that were equal to 3.2+GHz P4s.
The area where the P4 did well was rendering 3D graphics or video: linear work. Gaming, everyday work... it was slow. At one of my jobs, we had some 3.0GHz P4s that felt so sluggish compared to any AMD of that era.
Intel's anti-competitive business practices kept AMD down (AMD with more money would have had more for R&D).
Intel was smart when they built their Centrino platform on the Pentium M, not on P4's NetBurst. It's very much a PIII / AMD design layout. Intel realized that NetBurst was crap and the megahertz race didn't amount to squat anymore.
When Core 2 came out (based off the Pentium M design), AMD had just reached 40% desktop CPU market share. Walk into an Office Depot/Staples and you would see 8 out of 10 PCs with AMD CPUs.
Core2 kicked AMD in the balls. :)
SlyNine - Thursday, June 23, 2011 - link
Yea, and like he said, there was a period when the P4 (Northwood B and C) outperformed the Athlon XP. This was before the Athlon 64s came out.
But even an old Athlon 64 X2 3800+ (dual core) is fine for 90% of the population, and that's a 6-year-old chip. When the 3800+ X2 came out, there was no 6-year-old chip that could say the same. Boy, times have changed.
silverblue - Thursday, June 23, 2011 - link
Not per clock, though. Athlons were still faster for most things. It's just that we were getting 2.6 - 2.8GHz P4s while the Athlon XPs were still lurking around 2GHz.
yuhong - Saturday, June 25, 2011 - link
I think memory bandwidth helped too.
phu5ion - Thursday, June 23, 2011 - link
And if it wasn't for the Athlon 64, we'd all still be using single-core NetBurst arch in 2011. :P
yuhong - Saturday, June 25, 2011 - link
Yea, when the AMD Opteron came out in April 2003, around the same time as Northwood C, it was based on the B3 stepping, which had more errata and was limited to DDR333. That's why they waited for the C0 stepping that fixed these problems to release the Athlon 64, even though releasing it earlier would have killed the Northwood C immediately.
ahmedz_1991 - Tuesday, June 21, 2011 - link
Amen to that
sinigami - Wednesday, June 22, 2011 - link
Dear AMD, I believe I have found the benchmark you are looking for, and it is called 3DMark.
P.S. Dear AMD, I expect a finder's fee for solving your ills.
bhima - Tuesday, June 21, 2011 - link
In terms of productivity, the GPU isn't really integral in most applications. The entire Adobe suite and Office suite will run better on a Sandy Bridge i5 with an Intel GPU than on Llano. I really only see the GPU truly affecting performance in 3D applications, and even then you would probably want a production-class GPU instead of the 6000 series on the Llano processors. I see the Llano processors as a way for gamers to get a solid, mid-grade gaming laptop for under $800, but other than that, they just aren't better or even as good as Intel's offerings.
SlyNine - Thursday, June 23, 2011 - link
I don't know if I could call the performance numbers I've seen midrange. But I haven't seen it with faster RAM yet, so maybe.
designerfx - Tuesday, June 21, 2011 - link
Someone else posted this elsewhere, but the most significant fact here is that all the GPU and CPU manufacturers have stepped out aside from Intel.
sinigami - Wednesday, June 22, 2011 - link
Why can't they just use 3DMark?
Which makes me wonder: what next?
Intel denouncing 3DMark?
Beenthere - Tuesday, June 21, 2011 - link
Benches exist for just CPU or GPU testing if that's what people want. What most consumers want to see is real-world system performance. As noted, different users have different needs and priorities, so using the most appropriate benchmarks is important if you desire objective, relevant test data.
For years Intel has "influenced" benches to try and highlight their CPU core speeds. When AMD proved that core speed was not as important as system performance, Intel tried to get bench results weighted in their favor. This is nothing new, as GPU makers have also done this for years. What is unacceptable IMO is how unscrupulous some bench producers can be. We all know money talks, and Intel spends plenty to get what they want. As long as Intel can buy the bench test designs they desire, consumers will continue to be misled.
JarredWalton - Tuesday, June 21, 2011 - link
To say they're being "misled" is itself misleading. What Intel is doing is working to get benchmarks that show their CPUs are faster--and they are (at least since the P4 was retired and C2D showed up). AMD has a point that GPUs/APUs can provide a better experience in some cases, but there are a lot of things that still benefit from a good CPU. Actually, I reference this in the text, but for many users replacing an HDD with an SSD would be better than a faster CPU (since most users run a bloated pig of an Internet Security Suite).
If you do video and imaging work, Intel's current CPUs are usually superior. If you just run basic office/Internet tasks, Intel is technically faster but it's mostly meaningless as both sides are "fast enough". Even for video playback, AMD isn't vastly ahead of Intel--SNB can handle common video files perfectly well, and it's only the more esoteric stuff where they falter (and AMD isn't perfect there either). Where AMD is really ahead is in graphics, specifically when it comes to gaming, but you can easily fix that deficiency by adding a $50 GPU to an Intel system. So I guess where AMD is really ahead is in pricing for a certain minimum level of performance.
mmatis - Tuesday, June 21, 2011 - link
Is the Intel quad core really faster than the AMD hex core in video work, dollar for dollar? I'm running Corel MovieFactory 7, and the six cores seem to do rather well...
Alexvrb - Tuesday, June 21, 2011 - link
Jarred, I think the point is that with an AMD Fusion chip, most users would get the benefit of some pretty solid GPU acceleration compared to a competing part, without the need for a dedicated GPU. In case you didn't notice, most people don't build their own systems. They use OEM boxes, and even adding an entry-level dGPU could easily add $100 to the system cost - assuming they don't force you to bundle a faster processor or upgrade to a fancier model to get a dGPU in the first place.
So for things that people use every day, like IE9/Chrome/FF4, Flash, etc., that all benefit from hardware acceleration, I think it's a little unfair to not factor that in properly. GPU acceleration isn't fading away, it's growing. If SYSmark says that a $100 Intel chip is 50% faster than a competing AMD product at the same price, when the AMD product in actual use is as good if not better, I can see AMD's problem.
As for Nvidia, their angle is a little different I think, but no less valid. If you test a system with Intel's integrated HD3000, and then test it with a "costly" (OEM) add-in Nvidia GPU, SYSmark may tell people that it's not significantly better. "Don't bother, it's not worth the extra money." That might not sit well with Nvidia, and I can't really blame them.
Oh, and that's just for desktops. How much to "add a $50 GPU" to an Intel laptop, you think there, partner?
JarredWalton - Tuesday, June 21, 2011 - link
If you want a laptop with a dGPU, it's about $50 to $100 more than a laptop without (depending on the dGPU). Considering you can already get Optimus with GT 525M-level graphics and better CPU performance for around $700, well, I've already covered this plenty in the Llano review. Plus you're discounting how well Intel's HD 3000 handles things; sure, Llano is faster, but on average it's about 30-50% in games, and for things like viewing YouTube or surfing the web, it's really no faster (and sometimes slower). Outside of gaming, name me one area where an Intel Sandy Bridge system truly struggles. I've tested several, and I can help out: they don't struggle. With anything. Neither does Llano, but again, outside of gaming Intel is still faster.
Lemon8or - Wednesday, June 22, 2011 - link
HD 3000 can't handle DirectX 11 titles, so there you'd see an infinite performance improvement. :P
Alexvrb - Wednesday, June 22, 2011 - link
You beat me to it. But not just DX11 games. It also lacks CS 5.0, so for HD 3000 you'd have to fall back to what, 4.1? Above and beyond that, performance using CS or OpenCL will not be as good as it is with Llano or a dGPU.
But according to SYSmark, that isn't relevant. That's their beef with BAPCo. I mean really, IE8? FF 3.6? Gee, I wonder why they chose pre-hardware-accelerated browsers... oh hello there, Intel-logo briefcase stuffed full of cash!
TrackSmart - Tuesday, June 21, 2011 - link
Exactly my point. What do you mean by "overall system performance"? It's obvious that all modern processors can handle basic word processing and internet usage. But there are also lots of forum posts from sad 14-year-olds wondering why their brand new laptop can't run any games despite the fact that the salesman said it was a really fast computer.
How can you tell if your single number is high because the processor is ridiculously fast, but the GPU still can't play games? Or if the GPU is fast, but the processor doesn't have enough muscle for running simulations? And if both systems get the same overall score, but are so clearly different in capabilities, what's the point of your single score for overall system performance?
TrackSmart - Tuesday, June 21, 2011 - link
To finish the thought: In a world where laptops have several scores next to them, little Johnny could pay attention to the one that has a high score for gaming and use that to make a semi-informed decision for his needs. Sure, the gaming tests themselves will be subject to debate, but less so than the "overall system" score.
jleach1 - Sunday, June 26, 2011 - link
In what situations are AMD CPUs faster than same-generation Intel chips?
I can only think of one case: 3D applications.
We can go on all day about performance/frequency ratios, and performance/watt ratios, but only one thing matters:
And I'm paraphrasing here: "virtually all current CPUs will be more than adequate for 90% of users (or usage scenarios-- whatever it was)."
Intel is flat out faster at that 90%.
And if you fall within the other 10%, and that 10% matters to you at all, you need to be looking at discrete solutions.
I could say this: AMD GOT ROMPED at the 90% that matters to most people.
Now, I could play devil's advocate and say that Intel is in fact muscling its way through to better bench scores... but I could just as easily say that AMD, VIA, and Nvidia made this decision because they were lacking on that 90%.
It boils down to this: AMD is going to do what's in its best interests... and if that means crying wolf and forming a gang with the people who really care about iGPUs to boycott a benchmark, they're gonna do it.
They have a huge interest in seeing that integrated graphics scores matter in benchmarks... just like Intel has an interest in seeing its strengths score well.
If you think this is about justice, you're sadly mistaken. It's pure business, plain and simple.
Stargrazer - Tuesday, June 21, 2011 - link
Except for when it comes to SSDs. :)
TrackSmart - Tuesday, June 21, 2011 - link
Ha ha. I often think the same thing when I see the SSD reviews (in a good-natured way). Anand and company have to work pretty hard to find tests that show a large spread in SSD scores, and they have to do things that most users would never do. But the reviewers often mention that once you've gone SSD, the differences are not huge.
Though maybe those disclaimers need to appear more often. And they might advise folks that at a given price point, it may be better to step up in capacity (from 90GB to 120GB) than to step up in SSD performance (from really-fast to really-really-fast). But AnandTech readers are hopefully smart enough to figure that out themselves (or by looking at the PCMark Vantage scores that show only small differences from the slowest to the fastest SSDs).
Wierdo-X - Tuesday, June 21, 2011 - link
Intel's the only company left; all the other companies joined AMD in abandoning the benchmark!
piesquared - Tuesday, June 21, 2011 - link
For anyone to use any BAPCo benchmark as an unbiased performance metric is a confession of pure bias, and it's been known for years. It all comes down to who yells the loudest, and in this case, just like many others, Intel has had its ground troops out trolling forums and tech sites to obscure the message. I read this blog as a precursor to a full AnandTech BAPCo endorsement.
Here's a little nugget worth considering:
"AMD also points out that the president of BAPCo happens to be the head of performance benchmarking at Intel. "
http://www.newsweek.com/2009/06/18/hurry-up-and-ty...
TruTech - Wednesday, June 22, 2011 - link
"Here's a little nugget worth considering:"
The President of BAPCo is not the head of performance benchmarking at Intel. The President works for Dell. You, just like AMD, don't have your facts straight.
FatFire - Wednesday, June 22, 2011 - link
Shervin Kheradpir is the President of BAPCo and also Director of performance benchmarking at Intel. And Shervin Kheradpir also helped Futuremark develop PCMark.
So please inform yourself before posting rubbish.
TruTech - Wednesday, June 22, 2011 - link
Actually, Gary Lusk is President of BAPCo. Please inform yourself before posting rubbish.
fdel - Wednesday, June 22, 2011 - link
I did some random googling and couldn't find anything that links someone called "Gary Lusk" with BAPCo, while articles saying Shervin Kheradpir is president of BAPCo were plentiful; some were even BAPCo's own press releases.
I don't know who's right here, just saying what a Google search came out with. It sounds like Shervin Kheradpir is the real deal, but who knows, there might be a secret conspiracy.
For those interested, the specific phrases searched were "BAPCo president Shervin Kheradpir" and "BAPCo president Gary Lusk". Only the first page of search results was examined.
raddude9 - Tuesday, June 21, 2011 - link
Nvidia and VIA have also ditched SYSmark
Spoelie - Tuesday, June 21, 2011 - link
Depends how it is measured, refer to: http://i48.tinypic.com/103a9av.jpg
Another area of concern is the compiler used: http://www.agner.org/optimize/blog/read.php?i=49
At the end of 2010 the situation hadn't improved, and I doubt it has now.
The only gripe I have with how AnandTech reviews are structured is that SYSmark is usually the first result someone sees, and the breakdown of those results is also given the same weight as separate benchmarks in some reviews and in "Bench".
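The compiler concern in that Agner Fog link boils down to dispatch logic roughly like the following toy sketch (a simplification for illustration; the function name is made up): the vendor string is checked before the feature flags, so a non-Intel CPU falls through to the generic code path even when it supports the very same instruction sets.

```python
def pick_code_path(vendor: str, features: set) -> str:
    """Toy model of the biased dispatcher described in the blog post:
    vendor is tested first, so feature flags on non-Intel CPUs are
    effectively ignored."""
    if vendor != "GenuineIntel":
        return "generic"  # fast paths skipped regardless of capability
    if "avx" in features:
        return "avx"
    if "sse2" in features:
        return "sse2"
    return "generic"

# Two CPUs reporting identical feature flags get different code paths:
print(pick_code_path("GenuineIntel", {"sse2", "avx"}))   # avx
print(pick_code_path("AuthenticAMD", {"sse2", "avx"}))   # generic
```

A benchmark built with such a dispatcher measures the compiler's vendor check as much as the silicon.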
JarredWalton - Tuesday, June 21, 2011 - link
SM2007 may be first, but we also don't pay that much attention to it in the text, particularly of late. I know Anand has noted in the past that SM07 doesn't tend to scale with CPU performance or other factors very well, in part because the constituent applications are now about six years old (SM07 mostly uses apps from 2005 or so). I personally don't like it because it's a serious pain in the rear to run -- you have to be running a vanilla Windows Vista install (no SP1 or most of the patches, and no other applications). I can't imagine SM12 is going to be any better.
PCMark is a ton easier to run, and IMO the resulting performance is about as relevant... which is not to say it's particularly useful. PCMark Vantage and 7 are totally skewed by SSD performance, and even then the individual scores are still skewed by various items. I've started including the full set of PCM7 results for laptops, but Intel's Quick Sync is used in the Computation score and ends up being twice as fast as anything else. The remaining six tests all have storage elements, so using an SSD over an HDD can improve the scores by 50%. Ugh.
But then, tests like Cinebench and x264 encoding aren't necessarily great either. They're heavily-threaded, but I'd say 99.9% of computer users will never use a full 3D rendering application like Lightwave or 3ds or whatever, and probably 99% will never do video encoding other than to convert their iPhone/smartphone clip into a YouTube upload. So how do we measure if a PC is actually useful for most people? Well, we can't really, other than to just give a subjective impression of the performance. In that case, as I note in the news post, Anything faster than Llano is usually sufficient for most people.
In the end, AnandTech focuses on the enthusiast market where things like raw CPU, GPU, SSD, etc. speed matter to people. We assume that our readership is well enough informed to know that just because an Intel i7-980X places at the top of most CPU performance results in Bench, it's not a CPU we actually recommend for the vast majority of people. All things being equal, yes, I'd like a faster CPU, but my "old" Core i7 Bloomfield is plenty for me, and even my 3.2GHz Core 2 Quad Kentsfield that I use for my work PC is plenty fast. And my C2Q with an HD 5670 is still faster than Llano, though it uses more power I'm sure. Heh.
Janooo - Tuesday, June 21, 2011 - link
"Anything faster than Llano is usually sufficient for most people."
So Llano is not fast enough for most people?
WTF???
JarredWalton - Tuesday, June 21, 2011 - link
Including Llano. Try not to get offended so easily; it's the Internet, after all. :-)
Spoelie - Tuesday, June 21, 2011 - link
I realize one can build a case against any benchmark, but if it only runs on a very specific configuration nobody sane would use, I don't see the point of running it at all. There are so many things that could influence the relative standings: drivers, bugfixes, scheduling optimizations, library/compiler updates, ...
While you're certainly right with regards to the AnandTech readership, sometimes it's hard to use the site as a reference when trying to inform someone making a buying decision. In the Bench example, most would look at the CPU that wins the largest proportion of tests as the better CPU, regardless of what each individual test is labeled.
Anyhow, IMO the fact that this is happening in the run-up to Bulldozer is probably not a coincidence.
ajcarroll - Tuesday, June 21, 2011 - link
The article refers to Nigel Dessau as CFO. His title at AMD is Chief Marketing Officer. Thomas Seifert is the CFO (as well as currently holding the position of interim CEO).
JarredWalton - Tuesday, June 21, 2011 - link
Fixed, thanks.
Scabies - Tuesday, June 21, 2011 - link
Is it too much to ask to have benchmarks measure more abstract things like FLOPS, latencies, and IOPS? While SuperPi (and similar) don't really tell you how many iterations of YouTube you can have running while compressing files and playing Crysis, it's a very clear-cut "this is what this means" benchmark. More of those, plz.
(Also, how come no review ever uses the Windows Experience Index? Those results may be somewhat vague, but it should be a pretty platform-agnostic way of ranking things. And hey, no license needed!)
Spoelie - Tuesday, June 21, 2011 - link
Most modern enthusiast processors score the maximum amount of points. How are you going to differentiate between those?
As has been mentioned before, any midrange computer bought in the last 5 years is plenty fast for running everyday Windows (which is what that index measures, really).
JarredWalton - Tuesday, June 21, 2011 - link
Ugh... don't even get me started on the WEI! Let's see: a DX9 GPU is automatically limited to a score of 3.5 or less, if I'm not mistaken. DX10 is limited as well, but I think it might be to 5.9 or something. Basically, WEI considers old technology as always being inferior to new technology on some of the tests. The memory score doesn't tell whether you're high bandwidth and high latency, low bandwidth and low latency, or somewhere in between -- and it benefits a lot more from tri-channel RAM than regular applications do (and I can say the same for WinRAR/WinZip-type tests).
Mostly we don't run the theoretical stuff because it doesn't show how everything works together; FLOPS tests usually only measure a specific kind of FLOP, and they run almost entirely within L1 cache, which isn't very realistic. Real applications are best, but as noted they're sometimes hard to quantitatively benchmark.
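As a rough sketch of why raw FLOPS figures say so little (a hypothetical micro-benchmark, not anything SYSmark or WEI actually runs): the entire working set is a few scalars, so nothing ever leaves registers/L1, and only one op mix is measured.

```python
import time

def naive_mflops(iterations: int = 1_000_000) -> float:
    """Time a bare multiply-add loop on three scalars.

    This is the 'theoretical' style of test described above: one kind
    of floating-point op, a working set of a few bytes, nothing like
    the mixed loads of a real application."""
    a, b, c = 1.0000001, 0.9999999, 0.0
    t0 = time.perf_counter()
    for _ in range(iterations):
        c = a * b + c  # 2 floating-point ops per iteration
    elapsed = time.perf_counter() - t0
    return (2 * iterations) / elapsed / 1e6  # MFLOPS

print(naive_mflops())  # a positive MFLOPS figure that predicts little
```

Two systems can post identical numbers here and still feel completely different running actual software, which is the point being made.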
arthur449 - Thursday, June 23, 2011 - link
I prefer the term "Windex" for WEI.
BSMonitor - Tuesday, June 21, 2011 - link
So AMD wants benchmarks to include updated revisions that take advantage of forward-thinking hardware advancements.
Does anyone recall NetBurst's first iteration? SSE engines to replace some of the legacy x87 floating point operations. AMD = "Oh, look, our Athlons blow away P4s in x87 floating point performance, but don't recompile for the new SSE instructions in the P4."
What a joke. This company is like a whiny spoiled little kid: "What about me!" It's like the Boston Red Sox vs. New York Yankees rivalry. So sick of the whiny "what about me" kid. Sorry Boston, you aren't the Yankees.
DigitalFreak - Tuesday, June 21, 2011 - link
The losers always whine, no matter who it is.
Griswold - Tuesday, June 21, 2011 - link
Very true. Well, Intel spreads FUD when things don't go their way. Latest example? Windows 8 support for ARM. The bullshit claims Intel was quick to spread were furiously disputed by Microsoft...
At the end of the day, they're all the same, just in different flavours.
Targon - Tuesday, June 21, 2011 - link
Firefox 4 has been around for a while, so any NEW benchmark should use it, not Firefox 3.6. IE9 is also the most recent version, so it should be used, not IE8. The fact that these are very common and popular browsers and both use GPU acceleration would have been a benefit to AMD, and if most people only care about web browsing and perhaps some word processing, it makes sense to use what most people are using.
Then you have Flash, and as much as some people may hate it, I expect that upwards of 90 percent of people on a Windows platform have Flash installed and enabled. Flash is also GPU accelerated, meaning Intel would look REALLY bad using Firefox 4 with Flash content if that was a huge part of your benchmark. I bet Intel would complain if web browsing tests heavily weighted Firefox 4 + Flash performance and it showed Intel was slower.
dustwalker13 - Tuesday, June 21, 2011 - link
but Nvidia and VIA as well, leaving only Intel in the industry group BAPCo, making it a group of one...
If this is true, you should update your article, since 3 of 4 companies dropping out implies something's really fishy with SYSmark, while AMD dropping out alone against Intel would only imply AMD is afraid of the results of the new version of the tests for their new CPU designs.
jjj - Tuesday, June 21, 2011 - link
The most interesting part of this news is that Nvidia and VIA left too, so it's not just AMD going crazy.
Anyway, this benchmark is pointless: it doesn't take into consideration just the CPU or all of the hardware (from USB ports to screen and Wi-Fi), it's unclear what kind of usage patterns are factored in, and it gained no traction in the community.
I'm not very sure why it is being used in CPU reviews when it's not supposed to reflect CPU perf anyway. If it was any good it could be used in pre-built system reviews, but it's not, so why bother?
Since I mentioned Wi-Fi testing, maybe you guys could include that in notebook reviews, and maybe HTPC tests in all APU and GPU reviews (since for many, their main PC is also their HTPC).
BLaber - Tuesday, June 21, 2011 - link
Hope AnandTech can also stop using this benchmark now; it's useless, really.
Mr Perfect - Tuesday, June 21, 2011 - link
I never read that part of the articles anyway, same with 3DMark. The number of theoretical marks something gets just doesn't help in the real world.
arthur449 - Thursday, June 23, 2011 - link
I agree. The entire-system benchmarks are much less relevant to my interests than the specific application tests. We do a lot of encoding using x264, so the comparative AnandTech 2nd pass tests are a goldmine for us.
These Borg-collective benchmarks would be much more useful from a financial point of view if they stopped trying to collect all the results into one trademarked point value. They already conduct a number of tests using hardware traces of professional software that is financially unreasonable for an average consumer or hardware reviewer to own. If they made these individual tests center stage and gave us results in seconds or data/operations per period of time, then buying Borg Benchmark 2012 ("End of the World Edition!") for $100-150 would be a steal compared to even a single student license of whatever Autodesk is punishing its customers with. I would rather read about individual program scores than see some giant, meaningless, consumer-friendly score.
Likewise, I value the reviewer's words when explaining the charts more than the charts themselves. If it's a page of Borg benchmark charts and a paragraph at the bottom, I skip to the bottom. If the reviewer remarks that a particular product scored well in a certain test, I go back up and look at the chart.
Gigantopithecus - Tuesday, June 21, 2011 - link
I think a better comparison than 'whiny baseball fans' (...?) is human intelligence. As computers continue to develop, there are more and more facets to their performance. Like human IQ, a single number like SYSmark doesn't really reflect what a computer is capable of doing.
AMD's CPUs can't touch Intel's in terms of sheer compute performance, period. Bulldozer might change that, but for now, it's a fact. However, as Jarred alluded, even the $60 AMD Athlon II X2 250 is sufficient for most desktop computer users' needs. Think of it like this: you're running a company and need employees. If Intel is the applicant with an IQ of 140 and AMD is the applicant with an IQ of 100, who do you hire? For most positions, you're going to hire the person who is 'good enough'; you can pay them less, and they'll get the job done.
Obviously computers aren't as nuanced as people, but AMD does in fact field some real strengths compared to Intel, especially in terms of pricing. For example, the applicant with an IQ of 140 might also have some glaring personality flaw, while the average applicant might be fun to talk with at lunch. But you don't see those important considerations in one number like IQ. SYSmark does not accurately represent AMD's dominance in graphics, which is an important aspect of the whole computing experience.
AMD is not 'whining' about SYSmark because it has less powerful chips. Its problem with SYSmark is that it's an increasingly meaningless single number that doesn't reflect how CPUs, and now APUs, have developed in the last few years. Hell, most salespeople don't really understand benchmarks, and even many of AnandTech's forum users place bizarre emphasis on certain benchmarks. Of course AMD is not going to support a benchmark that they (rightly) believe doesn't paint an accurate picture for consumers. I build a lot of systems for the average user: internet browsing, email, office productivity, Facebook games, maybe some streaming HD content. An AMD Athlon II and an SSD cost about the same as the least expensive Core i3 and a mechanical hard disk. The AMD + SSD combo is a MUCH better experience for the average user because the system just feels snappy and immediately responsive, whereas the i3 system with its platter-based drive is sluggish by comparison.
IMHO, Intel continues to deftly leverage its strengths: manufacturing and compute power. AMD can never hope to beat or even rival Intel at its own game, and therefore chose a different path: address the weak links in the whole system. Right now, that's graphics.
Finally, if you honestly expect a benchmark to be unbiased when its President is the 'head of benchmarking' or whatever at Intel, I've a bridge you might be interested in buying. :P
Spazweasel - Tuesday, June 21, 2011 - link
If Intel is the only one left in BAPCo, and their chief benchmarking guy runs BAPCo, how can anyone say that SYSmark can be trusted? That there isn't a huge conflict of interest that BAPCo is neck-deep in? No VIA. No AMD. No NVIDIA. What's left? SYSmark is now just a rubber stamp.
SYSmark just became a waste of tester time and website bandwidth. Jarred, Anand, et al., just don't bother with it. You've got a limited amount of time to run benchmarks, and you need to choose for credibility. SYSmark no longer has that credibility. I'm sure you can come up with something that's much more vendor-agnostic.
frozentundra123456 - Tuesday, June 21, 2011 - link
Another case of whining from AMD because their CPUs do not measure up. There are plenty of other tests to measure GPU performance, and AMD would be favored in those. Just put out a competitive CPU for goodness sake!!!
Germanicus - Tuesday, June 21, 2011 - link
Then I guess NVIDIA and VIA are whining too, but perhaps you wouldn't know that since AnandTech neglected to title the story properly.
whatthehey - Tuesday, June 21, 2011 - link
I take it all these "NVIDIA and VIA left as well!" posts are referring to this: http://semiaccurate.com/2011/06/20/nvidia-amd-and-...
If you don't have any more credible source to point to, I wouldn't get carried away with lambasting Anandtech for not talking about anyone other than AMD. Perhaps NVIDIA and VIA will actually leave, but by my estimates I'd put Charlie over at SA batting at maybe .200. He also hates Intel as much as many of you people, so perhaps that's why you all like to read his site?
"If a salesperson comes to your company and mentions the suite, you know who they are pushing, it is that bad." What that really means is that most salespeople push Intel-based servers and PCs. Probably because they're faster in the tasks that 99% of businesses do regularly -- because we all know how often businesses want games to run better!
AnandThenMan - Tuesday, June 21, 2011 - link
AnandTech does mention at the bottom that NVIDIA has also left. The problem is, the article's title ONLY states AMD. An update would be in order, especially after it is confirmed that VIA has also backed out.
silverblue - Tuesday, June 21, 2011 - link
It is entirely conceivable that Bulldozer does not sufficiently address those areas in which Intel appears to be dominating; however, benchmarks really should be vendor agnostic.
AnandThenMan - Tuesday, June 21, 2011 - link
I think AMD should write their own bench that takes full advantage of their APU, and AnandTech should put this chart at the top of every review. Intel can stop their whining and make a proper GPU for goodness sake!
frozentundra123456 - Wednesday, June 22, 2011 - link
I am not sure if you are being sarcastic toward AMD or toward me about my post. Anyway, at least on the desktop, I think Fusion so far is a failure. The GPU performance is not that great, and the CPU is clearly inferior to Sandy Bridge, and maybe an earlier generation or two as well. Intel decided not to go into the GPU business, but they are very good at their primary business of making CPUs. How many generations ahead of AMD are they now? Two, maybe three, depending on when Bulldozer and Ivy Bridge come out and their relative performance.
So just get a superior CPU (Intel) and add a $50 graphics card. You will have performance superior in both areas to Llano. However, I do see a place for Llano in the notebook market (where it is not easy to upgrade the graphics) if the price is right.
And I am not an Intel fanboy. The first computer I had that was really suitable for gaming had an AMD processor, and I would like for them to put out a competitive product. So far I have not seen this on the desktop since the Athlon 64 days, yet they continue to hype their products, and somehow many people still think they are doing a great job.
AnandThenMan - Wednesday, June 22, 2011 - link
Fusion so far has been a runaway hit; AMD is selling every piece of silicon made for them. BTW, how many generations behind is Intel on graphics? 5? 6? How about we just say Intel is hopelessly behind in graphics and leave it at that.
Brunk - Wednesday, June 22, 2011 - link
Let's assume that we do, in fact, buy an Intel CPU, and then add in a €50 GPU. As I'm from Belgium, I'll use European prices:
According to Fudzilla, the top desktop Llano APU will be the "A8-3850 APU featuring an HD 6550D" @ €130.
Now, as you say, we subtract the €50 for the GPU.
So what can we buy for €80 from Intel? Well, not much.
We could go with a Pentium G840 (http://www.alternate.be/html/product/CPUs_Socket_1...
I wonder who comes out on top.
Back on topic...
I don't think anyone can hold it against AMD, NVIDIA, AND VIA for no longer supporting SYSmark if they feel it is no longer a valid benchmark.
Actually, it's Intel that's the odd one out. They are the ONLY company that still thinks this benchmark is in touch with reality.
And as for AnandTech: I've been reading this site for a long time, and in my personal opinion, please stop making the first four pages of ANY review bullshit synthetics.
Germanicus - Tuesday, June 21, 2011 - link
Jarred, this really should be titled something other than what it is currently... "AMD, NVIDIA, VIA Resign from BAPCo Over SYSmark12 Concerns" would be much better.
JarredWalton - Tuesday, June 21, 2011 - link
The news release came via AMD PR, and it didn't mention anyone else leaving. I'd have to dig around for additional details, and I don't want to get too far out there. Still, it's rather funny in a way to have an "industry group" with only one participant. :)
Donnie Darko - Tuesday, June 21, 2011 - link
So let me get this right. You got some information emailed to you and you posted it without checking the story because you don't want to do the job you are paid for? How is this reassuring to the readers here? Is this the case for all the stuff that gets emailed to AnandTech?
Spoelie - Tuesday, June 21, 2011 - link
There's no "checking the story" necessary; this is an official press release from AMD. Neither NVIDIA nor VIA has issued a press release as of yet; any other information is conjecture.
JarredWalton - Tuesday, June 21, 2011 - link
What exactly am I supposed to do? AMD sent me a news release saying they are leaving BAPCo and I wrote an article about that. The fact that NVIDIA is apparently leaving as well (with absolutely no reason given) is nice to know, but that's not necessary for this piece of news. Checking the story? Um... which part of "AMD PR sent it to me" do you not get? AMD sent it, and it's about AMD, so I'm pretty sure I have the story I need. It took NVIDIA several hours to get back to us with, "Yes, we are leaving." VIA, we still haven't heard from. I'd better pull this story for six hours while we wait for more details, because other people leaving changes... nothing.
Brunk - Wednesday, June 22, 2011 - link
It changes a lot. As was already stated, before now it seemed AMD was leaving due to issues with its own product rather than with the benchmark itself. When you know EVERYONE is leaving, it tells a different story altogether.
Donnie Darko - Wednesday, June 22, 2011 - link
To start, as a review site (not micro-blogging or news [such as DailyTech]) I would begin by never publishing a press release verbatim or writing an 'article' with one as the sole source of info. Otherwise you end up with this: http://semiaccurate.com/2011/06/16/intel-declares-... which quietly turns into this: http://www.tomshardware.com/reviews/intel-motherbo...
by which time most sentient life has already stopped reading your site (which wouldn't matter) and you lose page views and thus ad dollars (which does matter).
Now that I've answered your question, I'll take the time to refute your assertion. You did not write an article about 'AMD leaving BAPCo'. That information showed up in your article, but what you wrote was an article about benchmarking software; witness your opening statement: "What's in a Benchmark? This is a pertinent question that all users need to ask themselves, because if you don't know what a benchmark actually tests and how that relates to the real world, the scores are meaningless."
You go on to discuss the limitations of benchmarking software and do some editorializing on different chips and platforms (Atom vs Brazos, Llano vs SB and Discrete etc) and then attempt to justify your continued use of the inappropriate tools. All of this is fine.
Scattered throughout, though, you leave journalistic integrity (and facts) behind and begin to make assumptions about an event that you've obviously not investigated or understood: "Reading through AMD's announcement and Nigel's blog, it's pretty clear what AMD is after: they want the GPU to play a more prominent role in measurements of overall system performance."
That quote is what is called a lie through omission. You picked one of four key things to focus on (heterogeneous computing) and failed to even mention the others. The real list is: failure to have an open benchmark (so that what's being tested can be reviewed), failure to use representative workloads (heterogeneous computing), bias toward Intel designs, and generation of misleading results.
That you picked the least important of the listed reasons is telling (neither option is kind, so I won't call them out explicitly), as while AMD would love to have a major benchmark suite focus on what they do well to the exclusion of their competition, this hurts them much less than the other three.
Failure to be open: If the result says CPU X is 20% faster than CPU Y but doesn't tell you what it's benching then it's meaningless. If CPU X is 100% faster than CPU Y at task N and task N gets 65% of the weighting but task N has no real world relevance to customer P then CPU X isn't better than CPU Y for customer P. This is very important.
Bias to Intel designs: this shouldn't need additional explanation.
Generation of misleading results: this ties into the above two, but has to do with the overall packaging of the benchmarks more than anything else, so it gets its own category.
At the beginning of the article you also mention that Intel will be faster than AMD in SYSmark, and then spend a good chunk of the article defending the future use of SYSmark. Despite all the other text, this is editorial bias and doesn't happen with good/careful authors. It leads people to think that Intel makes a better CPU regardless of surrounding details and suggests that AMD is complaining only because it performs worse.
Here's where we get to the 'it's important to do your research before publishing a story'. The fact that AMD and NVIDIA and VIA dropped out of BAPCo is of critical importance. NVIDIA is a graphics manufacturer in this space (no CPU interest) and VIA has no graphics interest. This stops being a case of a poorly performing product being massaged by PR and becomes a legitimate concern about SYSmark. Two of the three parties with an interest in x86 compute left over concerns about the validity of the product, and two of the three parties with an interest in x86 graphics left over concerns about the validity of the product. It would need to be a cold day in hell for NVIDIA to line up and say AMD is right.
Others have been very critical of BAPCo too, such as the guys at OpenSourceMark, who have documented examples of SYSmark heavily biasing results to fit Intel's designs.
As for why any of this matters beyond personal integrity (which I freely admit doesn't mean anything on the internet; you pay for the server so you get to say what you want), we have to look past Intel's behaviour (which I think is fine; they are in it to sell chips, so by all means make 'tools' that make your product look good) and at what AnandTech does.
You benchmark and review hardware. If a key tool that allows you to carry out your jobs and run your business comes under question you normally do everything in your power to check the veracity of the tool. If someone in the pharmaceutical industry tells you the cholesterol test your doctor just gave you is really only designed to sell more medication and the results are really biased, you'd expect your doctor to have checked into this for you. While you are thankfully not responsible for anyone's life, people still trust you with their purchasing decisions based on the work done at Anandtech. Given how much time the site spends parroting about how unbiased and fair you are, how you try to use tests that give meaningful results and aren't swayed by PR, these issues are serious for you.
So what are you supposed to do? Do your job. That means a little research, maybe running some tests of your own. SYSmark says Intel CPU X outperforms AMD CPU Y in Excel by N%? You can test that. It turns out that you do this for a living, and have access to the gear to try CPU X vs. CPU Y. Run a Monte Carlo simulation, sort a very large data set, run some macros. These are easy things to do while confirming facts about a story. That way, when you sit down and write an article, your readers get the story, the facts, and reliable conclusions.
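For what it's worth, the kind of spot-check described above needs nothing exotic. A minimal sketch in Python (the workload sizes are arbitrary and purely illustrative, not anything from the article): time a CPU-bound Monte Carlo run and a large sort, and compare raw seconds between two machines instead of trusting a composite score.

```python
import random
import time

def monte_carlo_pi(samples: int) -> float:
    """Estimate pi by sampling random points in the unit square."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

def time_workload(label, func, *args):
    """Run a workload once and report wall-clock seconds."""
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.3f}s")
    return result, elapsed

# A CPU-bound numeric workload...
pi_est, _ = time_workload("monte carlo (1M samples)", monte_carlo_pi, 1_000_000)

# ...and a memory/branch-heavy one: sort a large random data set.
data = [random.random() for _ in range(1_000_000)]
_, _ = time_workload("sort (1M floats)", sorted, data)
```

Run the same script on both test systems and the per-task seconds speak for themselves, which is exactly the "results in seconds" reporting other commenters are asking for.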
If you want to write for Engadget (which is what you presented in this article), go write for Engadget. If you want to be a PR rep for a company, then go work in PR. If you want to write hardware reviews, then you need to actually stick to the tenets of your job all the time. You don't publish press release performance predictions, so why publish press release benchmark predictions?
Daniel
PS: VIA has also publicly confirmed it has left BAPco.
JarredWalton - Thursday, June 23, 2011 - link
Amazingly enough, we are a technical site that often runs news stories, and the fact that AMD left BAPCo is pretty big news. Yes, even AnandTech has tech stories similar to what you might find on Engadget or DailyTech. You might look here, for instance: http://www.anandtech.com/news/
I don't generally try to read what everyone else writes about a particular piece of news and then echo the thoughts of the market; I think for myself and provide my own technical analysis. So when AMD says that they don't like the latest SYSmark, immediately the first thing that comes to mind is, "Gee, I bet it doesn't favor their APUs as much as they would like." Whether that's good or bad is a different story, but to pretend that AMD isn't politicking is ludicrous.
My assertion is that you need to know what every benchmark does in order to determine whether the results are meaningful or not. I don't care if it's SYSmark, PCMark, Cinebench, 3DMark, SunSpider, or whatever. That AMD is leaving because they disagree with how SYSmark 2012 works is fine by me. I disagree with lots of benchmarks as being meaningful (Sandra and SuperPi immediately come to mind). Running benchmarks for a news story, especially when I may not have appropriate hardware on hand, doesn't work.
So now VIA and NVIDIA confirm they have left, but no one is really saying why other than VIA apparently saying they don't feel the workload SM12 measures represents a modern user or whatever. I still won't run SM12 on laptops, just like I didn't run SM07, because it's a royal pain to do so. You need to do a clean install (no service packs or other patches), and even then it doesn't always work. Anand can do it on desktops because he doesn't have to change them up every single review. Even so, including SM12 doesn't make an article any worse, unless the article were to then conclude that because SM12 favors CPU x, you should buy CPU x.
If the results from SM12 correlate with what we see in other CPU-centric tests, that's fine by me. As long as you understand it's a CPU/general performance metric and makes no demands of the GPU, you know what you're testing and what the results mean. Testing Cinebench and then complaining that it doesn't measure SSD performance would make as much sense to me. When I think of system performance (which is what SYSmark purports to measure), mostly I'm looking at CPU, storage, and perhaps a bit of GPU.
As I mention elsewhere, most of the people I know (i.e. not computer enthusiasts or gamers) still have no need of a good GPU. GPGPU isn't even remotely mainstream, video works fine on SNB (good enough for everyone besides HTPC purists), and the only thing Llano really does substantially better than Intel is running games on the integrated graphics. Frankly, Llano really isn't a good GPU; it's just a good integrated GPU that's only as fast as a $35 discrete GPU. If the drivers get worked out, Llano could make for a good HTPC setup as well, but right now that's not happening either. So, Llano is good for laptops, but on desktops it just doesn't mean a lot unless you absolutely refuse to buy a dGPU.
whatthehey - Friday, June 24, 2011 - link
Hey Donnie, can you comment on this? http://www.brightsideofnews.com/news/2011/6/24/amd...
Sounds like the lack of GPU aspects in SYSmark 2012 isn't actually the problem if that article is true (and let's be honest: it has plenty of parts that ring true). The problem with SM12 is that Bulldozer is going to suck, and SYSmark just points this out.
So pull your head out of AMD's ass. This announcement comes from AMD's MARKETING DEPARTMENT. Do you need anything more to prove that this has nothing to do with engineering and architectural superiority? Marketing is trying to get people LIKE YOU to ignore any benchmark that shows how badly AMD's CPUs are falling behind. And it's working. Enjoy your new Bulldozer system, to replace your amazing Phenom II system. Me, I'm going to continue running my Core i7 for a while longer, and probably upgrade to Ivy Bridge or its successor, because Bloomfield is already going to beat Bulldozer, never mind Sandy Bridge and Ivy Bridge.
saywut - Sunday, June 26, 2011 - link
That article is pure garbage; the tone of the writing and the "anonymous source" should make it pretty obvious that what you're looking at is somewhere between propaganda-grade and tabloid-grade. Besides, compare Intel's SYSmark superiority to any other real-world benchmark: the SYSmark score is always disproportionate to real life, and always in Intel's favor. To say AMD doesn't have a legitimate reason for leaving is absurd, and then you blame the victim by accusing AMD of trying to do exactly what Intel IS doing.
Donnie hit the nail on the head, as evidenced by Jarred's back-pedalling reply being longer than the actual article. Besides, the lesson we've learned from this is that AMD's "old" K10.5.2 architecture in Llano isn't that far behind SB, unless you build machines to run SYSmark. If Bulldozer makes any improvement at all, then it'll be just that much more competitive.
alpha754293 - Tuesday, June 21, 2011 - link
I think that ALL benchmarking is in the eye of the beholder.
For example, in my current work we run finite element models using Nastran and LS-DYNA. I also used to run computational fluid dynamics codes such as Fluent and CFX.
But most CPU benchmarks are rarely as intensive as those programs, and even with LINPACK, how you write/compile/run it will have an impact on the results.
It is for this very reason why those software vendors have developed "standard" benchmark cases that tests a variety of things including hardware and software improvements/features.
The downside is that a) a lot of those programs are expensive and b) the benchmarks themselves are time consuming (~25,000 seconds for a 3-car crash model, or about 7 hours for one run). So if you're testing a bunch of new processors, and you're testing the scalability of the additional cores (for example), you can easily spend between two weeks and a month just with that one program, running that ONE test.
Another example is in the realm of graphics processing. While most people test with computer games, because of what I do, we use Altair HyperMesh/HyperView. When you're looking at a model with 2.2 million nodes, that's a very heavy load for a GPU to handle. And I can almost assure you that even the best, top-of-the-line current-generation consumer cards won't be able to take that kind of load in stride while getting insane framerates. And while the point about those cards not being designed for such a workload is a valid one, why should you pick a benchmark that caters only to what the card does well?
That's like measuring 0-60 performance against a drag racer, and then complaining that it isn't designed to turn, so you're not going to put it on the Nürburgring.
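The core-scaling testing described above reduces to simple arithmetic once the run times are in hand. A hypothetical sketch (the times below are made up for illustration, not real LS-DYNA results): compute speedup and parallel efficiency relative to the smallest core count measured.

```python
def scaling_report(times_by_cores: dict) -> dict:
    """Compute (speedup, parallel efficiency) per core count,
    relative to the smallest core count measured."""
    base_cores = min(times_by_cores)
    base_time = times_by_cores[base_cores]
    report = {}
    for cores, t in sorted(times_by_cores.items()):
        speedup = base_time / t                       # how much faster than baseline
        efficiency = speedup / (cores / base_cores)   # 1.0 == perfect linear scaling
        report[cores] = (speedup, efficiency)
    return report

# Hypothetical wall-clock times (seconds) for one crash-model run.
times = {1: 25000.0, 2: 13500.0, 4: 7600.0, 8: 4800.0}
for cores, (speedup, eff) in scaling_report(times).items():
    print(f"{cores} cores: {speedup:.2f}x speedup, {eff:.0%} efficiency")
```

A table like this, per application, is far more informative than any single composite score, which is the commenter's point about vendor "standard" benchmark cases.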
Targon - Tuesday, June 21, 2011 - link
If you look back at the initial release of Windows Vista (as much as some people hated and still hate it), it did bring 3D to the desktop with the Aero theme. This is where having a better GPU really stands out, and it still does. Move a window around and, visually, a better GPU with Aero does improve the experience.
Yes, it's minor, but the fact that we have GPU power improving the desktop DOES mean that GPU power should not be discounted. There are other areas where GPU power comes into play, and ignoring the overall feel of how good a system is to use SHOULD be a part of what benchmarking is about. As the article stated, Firefox 4 (and the 5 beta) plus IE9 and other applications also use GPU power to improve performance. If you want to do anything involving graphics, the GPU can really come into play, so why not make it as important as how quickly a spreadsheet calculation can be run?
Alexvrb - Tuesday, June 21, 2011 - link
Yeah, it doesn't exactly instill you with confidence when no major GPU vendor will endorse the software.
Xyllian - Tuesday, June 21, 2011 - link
I don't want to come off as a total tool with my first comment here, but the title is probably the most misleading one I have ever read. "AMD resigns from BAPCo, NVIDIA joins them" makes it seem like AMD left and NVIDIA joined BAPCo, doesn't it?
"AMD resigns from BAPCo, NVIDIA leaves too" would be 400 times more accurate and impossible to misinterpret.
On the topic at hand, however, did anyone trust SYSmark before? Since 2002 Intel has been making the test continuously more biased towards their processors. Everyone should know this.
Allswell - Tuesday, June 21, 2011 - link
The simple fact is, if you want to have the best PC experience you can get... you use an Intel CPU.
Regenweald - Tuesday, June 21, 2011 - link
Indeed, buy an Intel CPU and do spreadsheets all day long. As long as you don't use the graphically intense internet, value quality visual entertainment, or game casually or seriously, you'll be fine with HD 3000 integrated.
Targon - Wednesday, June 22, 2011 - link
That's a joke, right? The low-cost Intel-based machines (under $550 USD) may have faster processors in them, but the rest of the components in the system are cheap and have many problems. To get many low-cost Intel-based machines to be stable, you end up doing things like turning off the power conservation features, keeping the machine from going to sleep (since in many cases waking up from standby causes system instability), and so on.
AMD-based machines may have a slower processor, but the supporting components in the low-cost machines really are better quality overall.
Allswell - Tuesday, June 21, 2011 - link
Jarred, are you sure you are accurate in saying Llano surpasses Sandy Bridge in video prowess?
As I understand it, the only thing AMD has going for them is 3D graphics.
JarredWalton - Tuesday, June 21, 2011 - link
I suppose the "video" part is debatable when comparing Llano with SNB, but Brazos is clearly superior to Atom in that area. I'll defer to Ganesh for the video stuff; I believe he is working on an article showing how Llano fares in various HTPC scenarios.
mino - Friday, June 24, 2011 - link
Sandy Bridge vs. ANYTHING current from AMD/NV:
2D: SB is a bit slow and buggy
3D: SB is slow, CHOPPY, and BUGGY (well, compared to previous Intel efforts, it has really good drivers...)
2D video: SB is OK, but the image quality is in line with GeForce cards before the FX series.
Amateur video encoding: SB wins easily thanks to QuickSync. Not that this has anything to do with graphics...
Calling AnandTech's assessment of SB vs. Llano graphics unfair to Intel is crazy, BTW. AT has been one of the biggest fans of the GPU improvements SB brought.
So when AT is forced to admit the SB IGP is not better than Llano's IGP, it really is something...
bgold2007 - Tuesday, June 21, 2011 - link
I don't see any comments about software versions. The AnandTech article mentions old versions of IE, etc.
As a low-level employee of a local government, we use older versions of software: XP Pro, older IE, etc.
Office 2010 is soon to be rolled out. And the BAPCo response does mention Sysmark is for organizations.
So maybe it is more useful for governments and large, slow-to-upgrade organizations, and less useful for on-the-ball individuals and small businesses. Plus, remember the adage 'no one ever got fired for buying IBM.'
No one will get fired for buying Intel either.
I think AMD, NVIDIA, et al. did the right thing; consumers have to be informed and decide based on their total needs.
frozentundra123456 - Wednesday, June 22, 2011 - link
What I meant by that statement is that AMD's attitude seems to be to talk up their products, then be late to market with a product that does not measure up to the hype.
And when they get criticized, they say they are still better, you are just using the wrong test. "You hurt my feelings (made my product look bad), so I am just going to ignore you."
Targon - Wednesday, June 22, 2011 - link
No, AMD has the attitude that the entire system is what people use, not just a single component at any one time. Now, if you saw a gaming benchmark that tested both CPU and GPU performance, but then discounted the CPU results, wouldn't that leave Intel looking like the platform to ignore?
The problem isn't that SYSmark ends up showing Intel being ahead, but that any benchmark in which AMD does well ends up not really affecting the OVERALL score.
When it comes to web-based benchmarks, it stands to reason that Firefox 4 (which came out back in March) would be used, since it has been out for three months, and IE9 would be used as well. Both of these browsers have been out long enough that a NEW benchmark labeled 2012 should use them.
I agree with other posters when they say that tests like doing an Excel calculation on 32,000 rows should not be seen as terribly important, so why let THAT test result have a higher weight in the overall result than some other tests?
cbass64 - Wednesday, June 22, 2011 - link
I run SYSmark '07 all the time, and the GPU plays a HUGE role in the score. Out of four categories, two are GPU bound. You can add RAM and a faster CPU and the scores hardly increase, but if you change the screen resolution or update a video driver the scores change dramatically. I would have thought AMD would like the fact that SYSmark is so GPU bound... If '12 is anything like '07, you'd think GPU-oriented companies would be drooling over it.
Targon - Wednesday, June 22, 2011 - link
I suspect that the 2012 version discounts the graphics-related stuff when it comes to the overall score, which WOULD make AMD walk out.
redisnidma - Wednesday, June 22, 2011 - link
I was really expecting more from you, Jarred, but it seems to me that you're also on Intel's payroll.
How dare you compare Llano's powerful GPU to Intel's crappy graphics in any way?
And as much as you (and Anand) hate it, GPGPU is becoming part of modern-day apps.
CPU performance still matters, but GPU performance matters more (if you really did your homework, you'd know what I'm talking about).
Shame on this site.
JarredWalton - Wednesday, June 22, 2011 - link
Name one major task that everyday people actually do that needs GPGPU. Yes, you can do some amazing stuff with such chips, but for my mom, who only checks her email and forwards spam my way, what does she miss out on? Llano's GPU isn't that great; it's merely better than Intel's IGP. Llano is basically the equivalent of a $35 GPU with a $75 processor, but with better power characteristics. We provided a complete review that showed where Llano does well, but while the Llano GPU is three times faster than the previous HD 4250, it's only about 30-50% faster than HD 3000.
Need some more food for thought? GPGPU performance running on an HD 5670 is more than double what Llano can provide. Anyone that cares about GPGPU is still going to need a discrete GPU. But you have to go and accuse us of "hating" GPGPU. Um, what? Where did I say I hate GPGPU? FYI, I'm running about 2200 Mhash/s on ten varieties of AMD GPUs. (Yes, AMD, whom I "hate" -- because for this particular task they're about 3x to 5x faster than NVIDIA, and my overclocked i7-965 would be pulling maybe 21-22 Mhash/s, or about twice as fast as an AMD E-350 that uses 1/10 the power.) You wouldn't know that, of course, because it's irrelevant; it would only be relevant for people that want to build a farm of PCs to do GPGPU computing.
Your talk of bias is unwarranted; there's nothing in this news post that's even remotely biased. I point out that AMD has issues with SYSmark, and I point out what those issues are, and I even show how AMD has plenty of facts to back them up. And you accuse me of bias against AMD for... what? Pointing out that they're right? That SYSmark 2012 is just a benchmark that only shows a few facets of performance, and if those facets are chosen "correctly" the result can be manipulated to put one part above another? I guess it's also bias when people point out that 3DMark runs poorly on anything without a discrete GPU, and that 3DMark11 requires DX11 support.
If SYSmark 2012 is a bad test because it puts more of an emphasis on the CPU and benefits Intel, then a gaming test that puts more of an emphasis on the GPU is equally bad. We use both types of tests for a reason, because both CPUs and GPUs (and SSDs) make a big difference in what a PC can do well and where it struggles. Yes, that's totally biased and I should be ashamed for saying such things!
Regenweald - Wednesday, June 22, 2011 - link
Jarred, you guys tested a DX11 APU in unsupported DX9 mode for your review of Llano without even mentioning that the SB competition does not support DX11 and that Llano was designed for the new API. I call that bias. I assume when Intel finally has a DX11 product, the test bench will be updated?

silverblue - Wednesday, June 22, 2011 - link
I do wish people would keep comments about bias to themselves. No offence intended, but it only serves to make the complainants look silly and takes more time to read through the comments.

The fact that SB doesn't support DX11 has been said a million times before, but for completeness' sake I suppose it should be mentioned.
With a GPU like Llano's, you're not likely to be gaming at high resolutions or high detail levels with DX11. If and when Intel does support DX11, their past track record suggests it'll be of no use on their products anyway due to the lack of performance, so the comparison isn't likely to be worthwhile, and thus the point is moot. I'd really like to see DX10 and DX11 performance figures for Llano, but even with the fastest RAM available, is it really going to toss us a pleasant surprise? Far better than SB's HD 3000, sure, but not quite at the same level as a decent mainstream card.
silverblue - Wednesday, June 22, 2011 - link
http://www.anandtech.com/show/4448/amd-llano-deskt...

I take it back; most of these should be DX10, even if we're talking low detail settings. DDR3-1866 might make a decent impression.
Regenweald - Wednesday, June 22, 2011 - link
I'm glad you took the time to read the review. Bumping up the API would probably bump down SB's already not-so-stellar performance, and that would be bad. Much less testing in DX11 mode, where Anandtech would have to explain to Intel why a review was published with only zeroes in the Intel column where numbers should be, in a case where 'lower is NOT better'.

On the topic of the review itself: it should provide ALL relevant information to a potential customer so that they can make the most informed choice upon purchase. It fails to mention that a potentially sub-$700 APU-based laptop would provide a more future-proof media and gaming path than an SB-based laptop, and be much cheaper than SB plus a discrete GPU.

Such an omission from such an experienced review site seems motivated by external factors.
silverblue - Wednesday, June 22, 2011 - link
In terms of the desktop Llano, Anandtech haven't actually reviewed it yet. In the final review, you can bet we'll see much faster RAM and how Llano can benefit from it. We should also see better and more benchmarks, though I would hope they don't include SYSmark... for obvious reasons. :P

Justin Case - Wednesday, June 22, 2011 - link
BAPCo and SYSmark have been a joke for the past six years. No reviewer with a clue even bothers with it, and no sysadmin gives it any weight when making a decision.

The problem isn't just that it completely ignores the GPU (which is a very important part of a desktop PC and an even more important part of devices like HTPCs, tablets, etc.), it's that all its benchmarks are based on completely unrealistic operations, carefully picked to make sure Intel wins by the biggest margin possible. For example, their Excel benchmark is based on the time it takes to load the application followed by an insane amount of sorting operations (which turned out to be the operation where Intel CPUs had the biggest advantage).
The real mystery to me is why AMD and NVidia still had any sort of connection to BAPCo.
yyrkoon - Wednesday, June 22, 2011 - link
I do not really know why you, or anyone, uses SYSmark or any of these garbage synthetic benchmarks anyway. They are about the furthest thing from real-world performance, ALWAYS. You may as well pick some arbitrary numbers and then go use Sandra to claim you've got the biggest eManhood on the planet. HDTune, HD Tach, etc. make more sense, but even these are NOT a good indicator of real-world performance. So again... in the end, the results only offer an obscure number that tends to obfuscate the real outcome. So, in other words, someone is making themselves important (and rich) by providing totally useless system information to others.
Gaming benchmarks make sense because gaming can be an important factor for someone buying equipment. Video encoding/transcoding is important because, again, different hardware can produce vastly different performance levels. Even Photoshop benchmarks are important to many. But who here runs an app called SYSmark other than for benchmarking? No one...
krumme - Wednesday, June 22, 2011 - link
Exactly; SYSmark measures something nobody needs.

Gaming and video encoding are what matter today, and they can be measured in real tests.
If Anand chooses to use SYSmark and whatever else like it, it will be bad for consumers, and a small step toward keeping Intel away from making the most useful products, even though they have the best technology and development resources in the world. An idiotic outcome for the consumers.
Use tools like the HD benchmark suite you developed. It was a bit heavy on the 4K random writes - wonder why - and lol later on post Intel G2 for that, but at least it reflects real usage, and is a valuable tool for consumers selecting an SSD.
Targon - Wednesday, June 22, 2011 - link
Gaming and video encoding? Really? Gaming is something that many read Anandtech and other sites to check performance for, but video encoding really does not take up all that much computer time for MOST people. Web browsing benchmarks should probably be up near the top when it comes to overall importance, with watching video content (from different sources) being way up there as well.

I can see "time to open and extract files from an archive" being up there too, since drivers you download DO take a fair amount of time to extract. Creation of archives should be taken into account as well, but not as much as extracting content. For much of this, of course, picking good hard drives and systems with a good SATA controller comes into play... except AMD has SATA 3 and Intel does not at this point in the latest chipsets. I could see Intel taking second place in file copy operations if you compare the latest AMD chipsets to Intel chipsets as a result.
Veroxious - Wednesday, June 22, 2011 - link
Honestly speaking, IMO Anandtech is NOT showing any bias against AMD... anyone thinking that did not read the article properly and/or is nitpicking. After reading the entire article I have to agree that AMD has done the right thing. BAPCo in its current state is just useless and skewed towards Intel.

And before someone accuses me of being an AMD fanboy, don't bother. Of the 4 computers I own, only one is AMD-based. That's simply because, for the roles they play in my life and the pricing in my neck of the woods, the Intel systems offer me more bang for the buck, period. I honestly believe that Intel is or was guilty of anti-competitive behaviour and underhanded, dishonest business practices in the OEM/corporate space and should have paid a bigger price. It's not like market share can be regained overnight, although currently Intel deserves to be dominant in the corporate space due to their obviously superior current CPUs.
In fact I am already planning on upgrading my gaming rig to the Sandy Bridge platform (although Ivy Bridge has to be considered), and I will be buying whatever graphics card(s) represent the best bang for the buck at the time of purchase, whether red or green, since bang for the buck is the most important consideration for me, seeing that I have 3 computers constantly in use in my household (excluding laptops). Of course, most people in my situation will opt for medium-level hardware, i.e. just a little more than I require, to be adequate for the next 3-4 years.
That said, benchmarks from sites such as Anandtech are an important tool for me when speccing a new PC, BUT they are ONE of many considerations in that process. Only a fool would base their entire spec on a specific benchmark.
The point that many people seem to miss is that, other than showing one the top-performing hardware, it also allows us to see other hardware's relative performance, which is what I need when speccing most PCs - i.e. how close in performance is an i5-2300 to an i7-2600 or Phenom II 955 BE in a specific task like encoding, for instance. That is what I use to spec a system suitable to the main tasks it will perform.
stancilmor - Wednesday, June 22, 2011 - link
People ask which computer to buy and I respond with: tell me what you plan to use it for?

Both AMD and Intel are right. On AMD's point, people really do need more graphics power. And on Intel's point, a little dedicated logic (i.e. QuickSync) goes a long way.
While difficult, perhaps impossible, benchmarks should somehow be normalized to performance per watt. Clearly not all cases benefit from that, because sometimes raw performance really does matter. Or maybe performance per dollar is a better indicator.
To give an example, I game on a system with a lowly Pentium dual-core 1.8GHz with 1MB of cache and an NVIDIA 8800GT 512 at 1920x1200 with all the eye candy (DX9) turned up. It sure could benefit from a faster CPU, but my system sustains over 24fps... to me, if movies are good enough at that speed, then so are games. And as was pointed out in the article, most of us need an SSD. I've certainly noticed that when my system is lagging, the hard drive light is solid on - not my measly CPU above 80% or my 2GB of system memory all used up.
Maybe for AMD and Intel the benchmarks should gauge performance per profit, or per cost to manufacture, because that would be more relevant to them.

Ultimately I want to know raw performance and performance per dollar for the applications that I run.
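The normalization suggested above is easy to sketch; every part name and number below is hypothetical, invented purely to show the arithmetic:

```python
# A toy sketch of normalizing a raw benchmark score to performance per watt
# and performance per dollar. All names and numbers here are made up.
parts = {
    #  name   : (benchmark_score, watts, dollars)
    "Chip A": (100.0, 80.0, 300.0),
    "Chip B": (80.0, 65.0, 150.0),
}

for name, (score, watts, dollars) in parts.items():
    print(f"{name}: {score / watts:.2f} pts/W, {score / dollars:.2f} pts/$")
```

With these made-up numbers, Chip A edges out Chip B on points per watt (1.25 vs 1.23) while Chip B wins decisively on points per dollar (0.53 vs 0.33), which is exactly why no single ranking answers the question for every buyer.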
alpha754293 - Wednesday, June 22, 2011 - link
Well, part of the problem is how the benchmarks are performed.

For example, 3DMark, SYSmark, etc. are all pre-packaged benchmarks run in more or less the same way.
However, as I've also mentioned, how those are compiled can have a HUGE impact on performance.
Like, if you take the LINPACK CPU floating-point benchmark and compile it 100 different ways, you're going to get 100 different results. So that raises the question: "what's the actual performance of my system?" And then add to that the complexity of how each CPU actually executes what it is being asked to do. I'm sure that if you profile the LINPACK application while it is running, how it runs will also affect performance.
And that's just a very simple example.
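That sensitivity is easy to demonstrate even without touching a compiler. The sketch below is a stand-in for the LINPACK point, not LINPACK itself: the same computation, expressed two different ways, produces an identical answer but different timings, so the "score" reflects how the benchmark was built as much as the hardware.

```python
# A minimal stand-in for the point above: the *same* floating-point sum,
# implemented two ways, gives identical results but different timings.
# (Pure illustration; this is not LINPACK.)
import time

N = 1_000_000

def sum_with_loop():
    # Explicit interpreted loop.
    t0 = time.perf_counter()
    s = 0.0
    for i in range(1, N + 1):
        s += 1.0 / (i * i)
    return s, time.perf_counter() - t0

def sum_with_builtin():
    # Same terms, same order, pushed through the built-in sum().
    t0 = time.perf_counter()
    s = sum(1.0 / (i * i) for i in range(1, N + 1))
    return s, time.perf_counter() - t0

s1, t1 = sum_with_loop()
s2, t2 = sum_with_builtin()
print(f"loop:    {s1:.9f} in {t1:.3f}s")
print(f"builtin: {s2:.9f} in {t2:.3f}s")
```

Both versions compute the same partial sum of 1/i² (which converges toward π²/6), yet on any given machine the two timings will generally differ; multiply that kind of variation across compiler flags, math libraries, and thread counts, and the spread described above follows naturally.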
Now you go with some of those aforementioned benchmarks; and the differences can compound and the result can be polar and diametrically opposed.
And then with the programs that I've mentioned that I use to benchmark my systems (and to some extent, Anandtech has run the Fluent benchmark), not knowing how those programs work (because they're not canned benchmarks) can affect the results too.
LS-DYNA's developers have conducted testing and research showing that if you specify a pfile for the MPP decomposition, it can reduce your runtime by 33%! That's nothing to sneeze at.
ash9 - Wednesday, June 22, 2011 - link
"you can do that on anything from a lowly AMD Brazos netbook to a hex-core monster system. Yes, we did leave out Atom, because there are certain areas where it falls short"

Wouldn't that make the Atom lowly?
asH
ash9 - Wednesday, June 22, 2011 - link
That means Intel is the only semiconductor manufacturer left.

Same as it ever was.
Lolimaster - Wednesday, June 22, 2011 - link
VIA also confirms it left BAPCo.

http://semiaccurate.com/2011/06/22/via-confirms-it...
Roy2001 - Thursday, June 23, 2011 - link
They are all losers, right?

dealcorn - Friday, June 24, 2011 - link
BSN reports that discussions with an assortment of AMD insiders indicate that AMD had to kill SYSmark 2012 because it accurately reports that Bulldozer CPU performance is remarkably weak. Apparently, Bulldozer performance was weak enough to kill sales to a large portion of the government market. The world requires a semi-credible explanation of why SYSmark's death was required, and APU/GPU issues provide cover. When the full SM12 whitepaper is released, it will be possible to speculate whether some alternate weighting has any material impact on the Intel vs. AMD ratings. Bulldozer scores on SYSmark 2012 may speak volumes about the credibility of these reports.

whatthehey - Friday, June 24, 2011 - link
Here's a link in case anyone is interested:

http://www.brightsideofnews.com/news/2011/6/24/amd...
Would be cool to see AnandTech update this article, but I presume the lack of actual Bulldozer scores means they won't. We'll see what BD has to offer in... what is it now, September? Maybe they can push it back to November for Black Friday? But if SYSmark 2012 shows poor performance on Bulldozer and Llano, I'm willing to go out on a limb and state that we'll see poor performance from Bulldozer and Llano in PCMark, Cinebench, SunSpider, Photoshop, etc. And you know what will happen then?
All the stupid AMD fanboys will come out and say those benchmarks don't matter either. They'll bitch and moan that AnandTech has sold out to Intel for pointing out that Intel is still faster. And on Bulldozer, Intel is REALLY going to be faster, because AMD won't have the weak "we have a real GPU inside our chips" excuse of Llano. When Bulldozer requires a discrete GPU just like Intel (actually, Intel doesn't, but for gaming it does), then all AMD has to fall back on is their CPU performance. I'm sure we'll see promotion of some benchmark that is heavily integer-dependent and makes optimal use of the two INT cores per BD module, but integer performance is really starting to become secondary. The same people who trumpet GPGPU will probably try to claim that the BD design makes sense, but the BD design is the exact opposite of GPGPU: more simple cores that only do integer ops, with one FP unit per BD module!
Mark my words: Bulldozer is going to be as big of a disappointment as the original Phenom, the Phenom II, and now Llano. Llano is only good for laptops; on a desktop you already have far better CPUs (even from AMD), and if Llano's IGP is enough to keep you happy then you don't actually use a GPU much. Here's to hoping AMD's new CEO actually has a clue about how to compete. (Hint: trying to discredit benchmarks that you originally promoted simply because they no longer work in your favor is not competing.)
araczynski - Friday, June 24, 2011 - link
Can't say I've ever cared about any of the synthetic benchmarks.

When I look at upgrading components, I like to compare CPU/GPU fps scores for real games exclusively, since everything else is at best tertiary to me as far as home computer use goes.
jecs - Tuesday, June 28, 2011 - link
Synthetic benchmarks are meaningless to me, and that's one part I don't pay attention to, because I can't interpret one abstract overall number in areas where I am specifically looking for efficiency. I read because of the less abstract numbers in games, video or content creation, 3D rendering, productivity, etc. And I am almost sure this is what most people come here repeatedly to read.

Until I read how a particular processor does on a specific task like 3D rendering, I am not satisfied. But also, from time to time I download Cinebench, for example, and do my personal test.

To be honest, if you decide not to include that information I will miss very little.
fteoath64 - Friday, July 15, 2011 - link
When I first read that even NVIDIA was leaving, the "alarm bells" started ringing. Now with VIA tossing in the towel, it is very clear something is very wrong here. The remaining companies hold the answer to the problem, or were causing the problem.

Well, any group of companies that does things in secret is certainly wrong. Secrecy allows for legal exploitation and is normally done in a covert way. What's new in the name of profit?
Transparency is the key to success, but some monopolies just have bad habits they have to play out.