49 Comments
Zstream - Thursday, July 15, 2010 - link
It kills the AMD low power motto :(
duploxxx - Thursday, July 15, 2010 - link
lol, all you can say about this article is something about AMD. Looks like you need an update on server knowledge. Since the arrival of Nehalem, Intel has had the best offer when you need the highest-performance parts, and its low-power parts still give the best performance. Since Magny-Cours (MC) arrived things got a bit different, mostly due to aggressive pricing across the whole mid-range, but the high-end and low-power bins still favor the Intel parts. Certainly in the area of virtualization AMD does very well.
What is shown here should be known to many people who design virtual environments: virtualization and low-power parts don't match if you run applications that need CPU power and response all the time. The L series can only be very useful for a huge bunch of "sleeping" VMs.
It would be interesting to compare with AMD, but 9 times out of 10 both the low-power and high-power Intel parts will be more interesting when you only run one tile; the many-cores-but-lower-IPC approach will lose against Intel's higher IPC per core in that battle.
Zstream - Thursday, July 15, 2010 - link
Excuse me? I am quite aware of low-power chips. The point AMD has made over the past four to five years is that low power and high performance can go together: matching Intel's performance while still saving you money. I have been to a number of AMD web conferences and seminars where they state the above.
MrSpadge - Thursday, July 15, 2010 - link
Not sure if you're being sarcastic here, as it's obvious AMD would tell you this.
But regarding the actual question: you'd be about right if you compared K8 or Phenom I based Opterons with Core 2 based ones. And you'd be very right if you compared them to Phenom II. However, the performance of those Intels was being held back by the FSB and FB-DIMMs, and power efficiency was almost crippled by the FB-DIMMs. But Nehalem changed all of that.
MrS
duploxxx - Friday, July 16, 2010 - link
4-5 years... Nehalem launched in Q1 2009, and since then everything changed. Before that, Xeon parts suffered from FB-DIMM power consumption and the FSB bottleneck, which is why AMD was still king on power/performance and was able to keep up on peak performance. Then Nehalem was king; Istanbul closed the gap a bit but lacked raw GHz and had higher power needs due to DDR2. The MC parts clawed back that Intel advantage, so now there is a choice again, but for low-power parts Nehalem/Gulftown is still king.
Penti - Saturday, July 17, 2010 - link
It invalidates the low-power versions from AMD also. That's his point, I would believe.
stimudent - Thursday, July 15, 2010 - link
Not really. If there can't be two sides to the story or a more diverse perspective, then it should not have been published. Next time, wait a little longer for parts to arrive - try harder.
MrSpadge - Friday, July 16, 2010 - link
A comparison to AMD would have been nice, but this article is not Intel vs. AMD! It already has two sides: high-power vs. low-power Intels. And Johan found something very important and worth reporting. No need to blur the point by including other chips.
MrS
Zstream - Thursday, July 15, 2010 - link
I know we have the VMware results, but could someone do an analysis of AMD vs. Intel chips?
For instance, I can get a 12-core AMD chip or a 6-core/12-thread chip from Intel. Has anyone done any tests with Terminal Servers or real-world usage of a VM (XP desktop) versus core count?
I would think that physical 12C vs. 6C impacts real-world performance by a considerably large amount.
tech6 - Thursday, July 15, 2010 - link
Great work Anandtech - it's about time someone took the low-power TCO claims to task.
cserwin - Thursday, July 15, 2010 - link
Some props for Johan, too, maybe... nice article.
JohanAnandtech - Thursday, July 15, 2010 - link
Thanks! We have more data on "low power choices", but we decided to split it up into several articles to keep things readable.
DavC - Thursday, July 15, 2010 - link
Not sure what's going on with your electricity cost calcs on your first page. Firstly, you're converting unnecessarily from watts to amps (meaning you're unnecessarily splitting into US and Europe figures).
Basically, here in the UK, 1 kW - which is what the 4 PCs in your example consume - costs roughly 10p per hour. Working on an average of 720 hours in a month, that would give a grand total of £72 a month to run those 4 PCs 24/7.
£72 to you US guys is around $110. And I can't imagine your electricity is priced any dearer than ours.
That gives a 4-year life-cycle cost of $5,280.
Have I missed something obvious here, or are you just out with the maths?
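For reference, the arithmetic behind that estimate as a quick Python sketch; the 1 kW draw, the 10p/kWh rate and the exchange rate are the commenter's rough assumptions, not measured figures:

```python
# Quick sketch of the cost estimate above; the 1 kW total draw, 10p/kWh rate
# and GBP->USD conversion are the commenter's assumptions, not measurements.
total_draw_kw = 1.0          # four ~250 W servers running flat out
rate_gbp_per_kwh = 0.10      # ~10p per kWh (UK, 2010)
hours_per_month = 720        # ~30 days * 24 h

monthly_gbp = total_draw_kw * rate_gbp_per_kwh * hours_per_month   # ~72 GBP
monthly_usd = monthly_gbp * 1.53                                   # rate implied by "£72 ≈ $110"
four_year_usd = monthly_usd * 12 * 4                               # ~5,300 USD

print(f"~£{monthly_gbp:.0f}/month, ~${monthly_usd:.0f}/month, ~${four_year_usd:.0f} over 4 years")
```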
JohanAnandtech - Thursday, July 15, 2010 - link
You are calculating from the POV of a datacenter. I take the POV of a datacenter client, who has to pay per amp that he/she "reserves". AFAIK, datacenters almost always count in amps, not watts.
(Also, 10p per kWh seems low.)
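To illustrate the amps-vs-watts point, a minimal sketch of how a given server draw translates into the current a client has to reserve; the 300 W figure is just an illustrative assumption:

```python
# Illustrative only: convert a server's draw into the amps a colo client "reserves".
def reserved_amps(watts, volts):
    """Current drawn at a given supply voltage, from P = V * I."""
    return watts / volts

server_watts = 300  # hypothetical peak draw of one 1U server
print(f"US (110 V): {reserved_amps(server_watts, 110):.2f} A")   # ~2.7 A
print(f"EU (230 V): {reserved_amps(server_watts, 230):.2f} A")   # ~1.3 A
# Note: this ignores power factor; real AC loads draw somewhat more amps
# than W/V suggests (see gerry_g's comments further down).
```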
MrSpadge - Thursday, July 15, 2010 - link
With P = V*I, at constant voltage, power and amps are really just different names for the same thing, i.e. equivalent. Personally I prefer watts, because that's what matters in the end: it's what I pay for and what heats my room. Amps by themselves don't mean much (as long as you're not melting the wires), as voltages can easily be converted.
Maybe the datacenter guys just like to juggle smaller numbers? Maybe they should switch over to hectowatts instead? ;)
MrS
JohanAnandtech - Thursday, July 15, 2010 - link
I am surprised the electrical engineers have not jumped in yet :-). As you indicate yourself, the circuits/wires are made for a certain amount of amps, not watts. That is probably the reason datacenters specify the amount of power you get in watt.
JohanAnandtech - Thursday, July 15, 2010 - link
I meant amps in that last sentence of course.
knedle - Thursday, July 15, 2010 - link
Watts are universal - it doesn't matter if you're in the UK or the US, 220 W is still 220 W - but with amps it's different. Since in Europe the voltage is higher than in the USA (EU = 220 V, US = 110 V) and P = U*I, you get twice as much power per amp, which means that in the USA your server will draw 2 A while the same server in the UK will draw only 1 A...
has407 - Friday, July 16, 2010 - link
No, not all watts are the same.
Watts in a decent datacenter come with power distribution, cooling, UPS, etc. Those typically add 3-4x to the power your server actually consumes. Add to that the amortized cost of the infrastructure and you're looking at 6-10x the cost of the power your server consumes directly.
Such is the fallacy of simplistic power/cost comparisons (and Johan, you should know better). Can we now dispense with the idiotic cost/kWh calculations?
Penti - Saturday, July 17, 2010 - link
A high-performance server probably can't run on 1 A at 230 V, which is the cheapest option in some datacenters. However, something like half a rack or a quarter rack would probably come with 10 A/230 V, more than enough for a small collection of 4 moderate servers. The big cost is cooling: normal racks might handle about 4 kW of heat/power (up to 6 kW; beyond that it's high density), and then you need more expensive stuff. A cheap rack won't handle 40 250 W servers in other regards either. 6 kW of power/cooling and 2x16 A/230 V shouldn't be that expensive. Anyway, you also pay for cooling (and UPS); even cheap solutions here normally charge per kW used. Four 2U servers is about 1/4 rack anyway, and something like 15 amps is needed if you're in the States.
WillR - Thursday, July 15, 2010 - link
10p per kWh may be low in the UK, but it's not low for residential in the US - $0.12/kWh is average. I just pulled out a bill from earlier this year and we paid 7.3 cents per kWh at my house. What it comes down to is that the data center potentially overcharges people in the (using your numbers) 19 to 38 cents per kWh range, but rates can be higher than $0.20/kWh in high-density areas like NYC or SF. The extra cost should go toward paying for upgrades and expansion of their infrastructure, so it's not unreasonable.
Worth mentioning to put this in perspective: 4 250-watt servers use 720 kWh/month, and the average house in the US uses 920 kWh/month, so it's not really as simple a setup as one might initially think.
http://www.eia.doe.gov/cneaf/electricity/esr/table... provides a nice table of average rates and usages.
knedle - Thursday, July 15, 2010 - link
I'm not sure if you're aware, but in most countries residential pays a much lower price per kWh (at least half) than commercial. Also, commercial pays extra for using electricity during the day and gets electricity cheaper at night.
This is why there are some factories in Europe that work only during the night.
WillR - Thursday, July 15, 2010 - link
That is not the case in the US. Residential pays higher rates than either commercial or industrial users.
http://www.eia.doe.gov/cneaf/electricity/epm/table...
Average Retail Price (cents/kWh)
Sector        Mar-10   Mar-09
Residential    11.20    11.33
Commercial     10.03    10.07
Industrial      6.50     6.79
Industrial customers tend to use very large amounts of energy in a very small area or number of clients, so they get cheaper bulk rates for purchasing a lot with little administrative overhead. It's also often the case that they can get a high-voltage line installed directly to the plant, which is expensive to install but increases efficiency dramatically.
These averages may reflect heavy use of off-peak consumption, but most plants I've experienced operate 24/7. Much of it is politics and bargaining for a better rate on the contract.
DaveSylvia - Thursday, July 15, 2010 - link
Hey Johan, great article as usual! Always enjoyed and appreciated your articles, including those from back in the day at Ace's Hardware!
JohanAnandtech - Thursday, July 15, 2010 - link
Good memory :-). I have been part of Anand's team for 6 years now; that is the same amount of time that I spent at Ace's.
DaveSylvia - Thursday, July 15, 2010 - link
Yeah! One of the first tech articles I recall reading was back in 1999. It was about how pipeline length influenced CPU clock speeds. You used the Pentium II, K6, and DEC Alpha as examples :). All good stuff!
MrSpadge - Thursday, July 15, 2010 - link
Or you could say that disabling Turbo Boost (by using the "balanced" power plan) results in a 10% throughput disadvantage.
Isn't there a power plan which lets the CPUs turbo up (as maximum performance does) and also lets them clock down when not needed (as balanced does)? It seems outright stupid to take turbo away from a Nehalem-like chip.
MrS
has407 - Thursday, July 15, 2010 - link
No reason you shouldn't be able to specify a power plan that does both, but for whatever reason it isn't provided out of the box.
I'd guess that given the relatively small difference in idle power between "performance" and "balanced" (which seems to be more of a "power capped" plan), maybe they (presumably the OEM?) decided it wasn't worth it.
There may also be stability issues with some system configurations or support concerns, as there's yet another set of variables to deal with.
has407 - Thursday, July 15, 2010 - link
Johan -- That brings up an interesting question: how much of the underlying CPU's power management are you testing vs. a particular vendor or OS configuration? I'd expect them to closely reflect each other, assuming everyone has done their job.
As you're using Win 2008, it would be interesting to see what powercfg.exe shows for the various parameters for different modes and systems; e.g., "Busy Adjust Threshold", "Increase Policy", "Time Check", "Increase Percent", "Domain Accounting Policy", etc. Are there significant differences across systems/CPUs for the same profile?
Whizzard9992 - Thursday, July 15, 2010 - link
Heat dissipation is also a concern, no? It's expensive to cool a datacenter. Low power should bring cooling costs down.
There's also a question of density. You can fit more low-power cores into 1U of space because of the lower heat dissipation. Multi-node blades are cheaper than 2U workhorses, and rack space is expensive for a lot of reasons. Just look at the Atom HPC servers: I bet the Atom would score pretty low in performance per watt versus even the LP Xeon, but its sheer size and thermal envelope fit it in places the Xeon can't.
Frankly, I'd be surprised if the low-power Xeon saved "energy" at the same workloads versus the full-power part, given that both are on the same architecture. LP Xeons are really an architecture choice, greasing the transition to many cores via horizontal scaling. A good desktop analogy would be: "Is one super-fast core better than a slower multi-core?" Fortunately for the datacenter, most servers only need one or the other.
Also, with physical nodes scaling out horizontally, entire nodes can be powered down during down times, with significant power savings. This is software-bound, and I haven't seen this in action yet, but it's a direction nonetheless.
Without getting into all of the details, I think a proper TCO analysis is in order. This article really only touches on the actual power consumption, where there are no surprises: the full-power part peaks a little higher in performance, and the LP part stays within a tighter thermal envelope.
The value of low power is really low heat in the datacenter. I'd like to see something that covers node density and cooling costs as well. A datacenter with all LP servers is unrealistic, seeing as some applications that scale vertically will dictate higher-performing processors. It would be nice to see what the total cost would be for, say, a 2,000-node data center with an 80% LP population versus a 20% LP population. The TDP suggests a 1/3 drop in cooling costs and 1/3 better density.
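A back-of-the-envelope sketch of that comparison could look like the following; the per-node wattages, the cooling/distribution multiplier and the electricity rate are all hypothetical assumptions, not figures from the article:

```python
# Hypothetical comparison of LP-heavy vs LP-light node populations.
# All inputs are illustrative assumptions, not measured values.
NODES = 2000
FULL_POWER_W = 300        # assumed average draw of a full-power node
LOW_POWER_W = 230         # assumed average draw of a low-power node
COOLING_OVERHEAD = 2.0    # assumed: ~2 W of cooling/distribution per 1 W of IT load
USD_PER_KWH = 0.12
HOURS_PER_YEAR = 8760

def annual_power_cost(lp_fraction):
    lp_nodes = int(NODES * lp_fraction)
    it_watts = lp_nodes * LOW_POWER_W + (NODES - lp_nodes) * FULL_POWER_W
    total_watts = it_watts * (1 + COOLING_OVERHEAD)          # IT load plus facility overhead
    kwh = total_watts / 1000 * HOURS_PER_YEAR
    return kwh * USD_PER_KWH

for frac in (0.2, 0.8):
    print(f"{int(frac * 100)}% LP population: ~${annual_power_cost(frac):,.0f}/year")
```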
has407 - Thursday, July 15, 2010 - link
Good points. At the datacenter level, the IT equipment typically consumes 30% or less of the total power budget; for every 1 watt those servers etc. consume, add 2 watts to your budget for cooling, distribution, etc.
Not to mention that the amortized power-provisioning cost can easily exceed the power cost over the life of the equipment (all that PD equipment, UPSes, etc. cost $$$), or that high power densities have numerous secondary effects (none good, at least in typical air-cooled environments).
I guess it really depends on your perspective: a datacenter end customer with a few racks (or fewer); a datacenter volume customer/reseller; a datacenter operator/owner; or a business that builds and operates its own private datacenters? AnandTech's analysis is really more appropriate for the smaller datacenter customer, and maybe smaller resellers.
Google published "The Datacenter as a Computer -- An Introduction to the Design of Warehouse-Scale Machines", which touches on many of the issues you mention and should be of interest to anyone building or operating large systems or datacenters. It's not just about Google, and it has an excellent set of references; freely available at:
http://www.morganclaypool.com/doi/abs/10.2200/S001...
Whizzard9992 - Monday, July 19, 2010 - link
Very cool article. Bookmarked, though I'm not sure I'll ever read the whole thing. Scanning for gems now. Thanks!
gerry_g - Thursday, July 15, 2010 - link
Just a little engineering clarification: one pays for electricity in energy units, not voltage, amperage, or even power. Power is an instantaneous measurement; energy is power over time.
Electrical energy is almost universally measured/billed in kWh (kilowatts x hours). Power plants and some rare facilities may use megawatt-hours.
Why does it make a difference? Actual voltage not only has different standards, but there are also distribution losses. For example, nominal US 240/120 V (most common on newer utility services) may droop considerably. However, in the US, if the building is fed via a three-phase supply, nominal will be 208/120 volts!
The meter that measures the energy you pay for measures real power multiplied by time, regardless of single phase, three phase, nominal voltage, or voltage losses in the distribution system.
Most modern computer power supplies are fairly insensitive to reasonable variations in supply voltage. They have about the same efficiency at high or low voltage. Thus they will consume the same wattage for a given load regardless of the present voltage.
We measure computer power supplies in watts (the symbol is capitalized since the unit is named after a person, as with the volt and amp: W, V and A), so Wh (hours are not named after a person, hence the lowercase h) is the most consistent base unit for energy consumption.
Watts, of course, convert directly to units of heat; thus the air-conditioning costs are somewhat directly computable.
Just trivia but consistent ;)
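Since watts convert directly to heat, the cooling load is easy to estimate; a small sketch (the 1 kW input is just an example, the conversion constants are standard):

```python
# Rough cooling-load estimate from IT power draw; the 1 kW input is an example.
BTU_PER_HOUR_PER_WATT = 3.412   # 1 W dissipated = ~3.412 BTU/h of heat
BTU_PER_TON = 12000             # 1 "ton" of cooling = 12,000 BTU/h

it_load_watts = 1000            # e.g., the four ~250 W servers discussed earlier
btu_per_hour = it_load_watts * BTU_PER_HOUR_PER_WATT
cooling_tons = btu_per_hour / BTU_PER_TON

print(f"{it_load_watts} W of IT load -> ~{btu_per_hour:.0f} BTU/h -> ~{cooling_tons:.2f} tons of cooling")
```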
gerry_g - Thursday, July 15, 2010 - link
I should clarify: watts are NOT voltage times amperage in AC systems! Watts represent real power (billable, and real heat). Reactive and non-linear components generally draw greater amperage than the billed real power would suggest if one just multiplies V x A! Look at a power supply sticker: it has a wattage rating and a VA (volts x amps) rating that do not match!
Also note, UPS systems have different VA and wattage ratings!
For this reason, costs can only be computed from watts x time (kWh, for example), not from voltage or amperage.
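A tiny sketch of the real-power vs. apparent-power distinction; the current and power-factor values are illustrative assumptions:

```python
# Real power (W) vs apparent power (VA); the load numbers are illustrative.
volts = 115.0
amps = 2.6              # what a clamp meter might show on one server feed (assumption)
power_factor = 0.95     # typical-ish for a modern active-PFC server PSU (assumption)

apparent_va = volts * amps                 # what V x A naively gives (~299 VA)
real_watts = apparent_va * power_factor    # what the utility meter actually bills (~284 W)

print(f"Apparent: {apparent_va:.0f} VA, real (billable): {real_watts:.0f} W")
```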
has407 - Friday, July 16, 2010 - link
Hmm... If your watts come with power distribution, UPS, cooling and amortization, then maybe "costs can only be computed by Watts x Time".
If not, I think you're going to go broke rather quickly, at least allowing for ~1.25x for power distribution, ~1.5x for UPS, ~2x for cooling, and ~3x for CAPEX/amortization.
Would you settle at 12x your Watts for mine (in volume of course and leaving me a bit of margin)? Counter offer anyone?
has407 - Thursday, July 15, 2010 - link
True, but as Johan posted earlier, he's looking at it from the datacenter customer's perspective, and amps is how it's typically priced, because that's how it's typically provisioned.
The typical standard datacenter rate sheet has three major variables if you're using your own hardware: rack space, network, and power. Rack space is typically sold in RUs with a minimum of 1/4 rack. Network is typically a flat rate at a given speed, GB/mo, or some combination thereof. Power is typically 110 VAC at a given amperage, usually in 5-10 A increments (at least in the US). You want different power--same watts but 220 instead of 110? Three-phase? Most will sell it to you, but at an extra charge, because they'll probably need a different PDU (and maybe power run) just for you.
They don't generally charge you by the Watt for power, by the bit for network, or anything less than a 1/4 rack because the datacenter operators have to provision in manageable units, and to ensure they can meet SLAs. They have to charge for those units whether you use them or not because they have to provision the infrastructure based on the assumption that you will use them. (And if you exceed them, be prepared for nasty notes or serious additional charges.)
Those PDU's, UPS's, AC units, chillers, etc. don't care if you only used 80% of your allotted watts last month--they had to be sized and provisioned based on what you might have used. That's a fixed overhead that dwarfs the cost of the watts you actually used, that the datacenter operator had to provision and has to recover. If you want/need less than that, some odd increment or on-demand, then you go to a reseller who will sell you a slice of a server, or something like Amazon EC2.
If it makes people feel better (at least for those in the US), simply consider the unit of power/billing as ~396 kWh/mo (5 A @ 110 VAC) or ~792 kWh/mo (10 A @ 110 VAC). Then multiply by 3, because that's the typical amount of power that's actually used (e.g., for distribution, cooling and UPS) above and beyond what your servers consume. Then multiply that by the $/kWh to come up with the operating cost. Then double it (or more), because that's the amortized cost of all those PDUs, UPSes and the cooling infrastructure, in addition to the direct cost of the electricity you're using.
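Putting those steps into a small sketch (the circuit size, the electricity rate and the multipliers are the commenter's rough rules of thumb, not measured values):

```python
# Rough colo cost model following the steps above; all factors are rules of thumb.
amps = 5                 # provisioned circuit (5 A or 10 A increments are typical)
volts = 110
hours_per_month = 720
usd_per_kwh = 0.12

provisioned_kwh = amps * volts * hours_per_month / 1000      # ~396 kWh/mo for 5 A @ 110 V
total_kwh = provisioned_kwh * 3                              # x3 for distribution, cooling, UPS
operating_cost = total_kwh * usd_per_kwh                     # direct electricity-related cost
all_in_cost = operating_cost * 2                             # x2 (or more) for amortized PDU/UPS/cooling gear

print(f"~{provisioned_kwh:.0f} kWh provisioned, ~${operating_cost:.0f} operating, ~${all_in_cost:.0f} all-in per month")
```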
In short, for SMBs who are using and paying for colo space and have a relatively small number of servers, the EX Xeons may make sense, whereas those operating private datacenters or a warehouse full of machines may be better served by the LPs.
gerry_g - Friday, July 16, 2010 - link
> True, but as Johan posted earlier, he's looking at it from the datacenter customer perspective, and amps is how it's typically priced, because that's how it's typically provisioned.
Not true, at least in the US or UK. Electricity is priced in kWh. Circuit protection is rated (provisioned) in amps. (A simple Google search for electricity price or cost will reveal that.)
Volts x amps is not equal to wattage for most electronics. Thus V x A x hours is not what one is billed for.
JarredWalton - Friday, July 16, 2010 - link
You're still looking at it from the residential perspective. If you colocate your servers in a data center, you pay *their* prices, not the electric company's prices, which means you don't just pay for the power your server uses but also for cooling and other costs. Just a quick example:
Earthnet charges $979 per month for one rack, which includes 42U of space, 175GB of data, 8 IP addresses, and 20 amps of power. So if you're going with the 1U server Johan used for testing, each server can use anywhere from 168 W to ~300 W, or somewhere between 1.46 A and 2.61 A. For 40 such servers in a rack, you're looking at 58.4 A to 104 A worst-case. If you're managing power so that the entire rack doesn't go over 60 A, at Earthnet you're going to pay an extra $600 per month for the additional 40 A. This is all in addition to bandwidth and hardware costs, naturally.
In other words, Earthnet looks to charge business customers roughly $0.18 per kWh, but it's a flat rate where you pay the full $0.18 per kWh for the entire month, assuming a constant 2300 W of power draw (i.e. 20 A at 115 VAC). In reality your servers are probably only maxing out at that level and generally staying closer to half the maximum, which means your "realistic" power costs (plus cooling, etc.) work out to nearly $0.40 per kWh.
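Working backwards from those numbers as a quick sketch (only the figures quoted in the comment are used; the 50% utilization is the comment's rough assumption):

```python
# Back out the effective electricity rate from Earthnet's incremental power pricing.
extra_monthly_usd = 600      # quoted price for an additional 40 A
extra_amps = 40
volts = 115
hours_per_month = 720

extra_kwh = extra_amps * volts * hours_per_month / 1000      # ~3312 kWh/month provisioned
flat_rate = extra_monthly_usd / extra_kwh                    # ~$0.18 per kWh if fully used

utilization = 0.5                                            # servers averaging ~half of peak (assumption)
effective_rate = flat_rate / utilization                     # ~$0.36 per kWh actually consumed

print(f"~{extra_kwh:.0f} kWh, ~${flat_rate:.2f}/kWh flat, ~${effective_rate:.2f}/kWh at 50% utilization")
```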
Penti - Saturday, July 17, 2010 - link
Data centers aren't exactly using a 400 V/240 V residential connection. It's 10/11/20 or even 200 kV with their own transformers, and of course megawatts of power.
Inside the datacenter it's all based on the connections and amps available, though. You just can't put an unlimited load on a power socket from a PDU. And it's mostly metered.
gerry_g - Thursday, July 15, 2010 - link
Humor intended...
The average human outputs ~300 BTU/hour of heat and significant moisture. Working from home will cut the server facility's air conditioning bill by lowering the heat and humidity the AC system needs to remove!
duploxxx - Friday, July 16, 2010 - link
It is a very nice test, but these tests are always done with the extreme low or extreme high bins, while the masses aren't buying those parts. In the field you rather see huge piles of, for example, the E5620-30 series and the X5650; the result differences would be interesting to see, or would at least for once provide an idea of the scaling across the portfolio. (I know this is a difficult one.)
Although it is very interesting to see the differences within and between vendors, there is one major thing missing in all these tests. This review gives any interested IT person a good overview of what he could do about power consumption and performance, but what is left out here is the big influence of the OEM.
OEMs have their own power-saving regulators/dynamics that influence the server a lot, both in the OS and the BIOS, often in a very bad way. So while it is an interesting article, most IT people will never get the result they wanted due to the OEM implementation.
indiamap - Friday, July 16, 2010 - link
I do agree 100% with the author. Johan is very clear on what he is most concerned about: the electricity bill. It's very true that electricity bills eat up a large amount of the income of <a href="http://www.indiamaphosting.com/">Internet Marketing India</a> companies, and it's a huge burden from a business point of view. This can be tackled by opting for better hardware which is compatible with green energy and helps cut the electricity bill extensively, saving a huge sum of money for the organization in the long run.
indiamap - Friday, July 16, 2010 - link
This is what the internet companies were looking for: cost cutting. It is essential from a business POV. The lower the energy consumption, the more can be invested in product research. Great numbers exposed. Thanks a lot, Johan. I wonder if this is why Google started electricity-generating stations to power its massive data centers.
For more information, please visit: http://www.indiamaphosting.com/
mino - Friday, July 16, 2010 - link
Johan - great article. Keep it up!
Whizzard9992 - Monday, July 19, 2010 - link
Can we please clean up the spam here? Where's the "REPORT" button?
Toadster - Friday, July 16, 2010 - link
Given the results you've found, it would be great to see how power capping can influence the workloads as well. The latest Dell PowerEdge-C platforms support Intel Intelligent Power Node Manager technology - it would be fantastic to have a look!
Casper42 - Saturday, July 17, 2010 - link
So first off, I have to say that using a home-built machine with an Asus mobo and trying to talk intelligently about datacenter power is not really a fair comparison. Stick to the big 3 (HP/Dell/IBM) when doing these kinds of comparisons.
Now, the real reason I posted is that you mentioned the speed of the memory you used, but made NO mention of the speed the memory was actually running at.
With the Nehalem and Westmere Xeons, only the X series can run the memory at 1333, while the others (E and L series) start at 1066. When you run more than one bank of memory you can also see your memory frequency decline, depending on the server vendor you are using. I think HP has a bit you can flip in their machines that will allow you to run 2 banks @ 1333 (again, assuming an X-series proc), but if you don't turn that on, you step down from 1 bank @ 1333 to 2 @ 1066 and even 3 @ 800.
The reason I bring this up is that you said yourself your machine was NOT CPU bound, and you weren't entirely sure why the tests completed in such different times. Well, memory performance could be part of that equation.
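As a concrete illustration of the stepping described above, a minimal sketch that just encodes the 1333/1066/800 rule of thumb from the comment; actual behavior varies by CPU bin, vendor and BIOS settings:

```python
# Rule-of-thumb DDR3 speed vs. DIMMs per channel on Nehalem/Westmere Xeons,
# as described above; real behavior depends on CPU bin, vendor and BIOS.
def ddr3_speed(dimms_per_channel, x_series=True):
    if dimms_per_channel <= 1:
        return 1333 if x_series else 1066   # E/L series top out at 1066
    if dimms_per_channel == 2:
        return 1066
    return 800                              # 3 DIMMs per channel

for dpc in (1, 2, 3):
    print(f"{dpc} DIMM(s)/channel -> DDR3-{ddr3_speed(dpc)}")
```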
Lastly, you have to remember that not every server in a DC is running VMware/Hyper-V; there are still tons of servers with basic Windows or Linux app/web workloads running right on the hardware. These kinds of servers on average in the industry run at less than 10% of the CPU max in a given day (there might be spikes for backups and other jobs, but the average is <10%).
So if you had a rack with 20 2U servers and you didn't need VMware/SQL/Oracle-level performance in those racks, why not run them with L-series processors? Across an entire rack you would be saving a decent amount of power.
PS: Where are you guys at AT located? Your "About Us" button up top has been useless for quite some time now. Not sure it could be pulled off, but you should really look into asking the big 3 for demo gear. Getting a Nehalem EX right now is damn near impossible but a Westmere EP would be doable. The problem here is they do loaners to get sales, not to get reviews, so what you really need to do is find some friends who work in IT at very large companies in your area who would be willing to let you get some wrench time on their demo equipment. 60-90 day loans are quite common.
-Casper42
tjohn46 - Tuesday, July 20, 2010 - link
I'm surprised I haven't seen anyone else make a similar comment yet: I've been curious about this for a long time, but I would rather see a comparison between two CPUs that are intended to be directly comparable.
It looks like Intel changed things a bit with the 5600-series Xeons, but previously (including with the 5500s) Intel would match up model numbers, cores, and clock speeds; the model with the 'L' prefix would just have a lower TDP. I was always curious whether those performed just as well or not.
For example:
E5520 vs L5520
E5530 vs L5530
I see AMD also has some comparable 80 W and 65 W models if you do end up testing Opterons.
That would be the real "is it worth the processor price premium?" question in my opinion. Of course the high-end part, which doesn't have a comparable "low power" model, is going to perform better... but like someone else said, a typical data center tends to have many more of the midrange parts (like an E5530) installed, which also have lower-power counterparts at a $200-ish premium.
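To put that $200-ish premium in perspective, a rough payback sketch; the 20 W TDP gap (80 W E-series vs. 60 W L-series), the use of TDP as a stand-in for actual savings, the electricity rate and the overhead multiplier are all simplifying assumptions:

```python
# Rough payback estimate for an L-series price premium; inputs are simplifying assumptions.
premium_usd = 200            # approximate price delta per CPU (from the comment)
tdp_delta_w = 20             # e.g., 80 W E5530 vs 60 W L5530 TDP; real savings are usually smaller
usd_per_kwh = 0.12
overhead = 3.0               # count cooling/distribution on top of the chip itself (assumption)
hours_per_year = 8760

savings_per_year = tdp_delta_w * overhead / 1000 * hours_per_year * usd_per_kwh
payback_years = premium_usd / savings_per_year
print(f"~${savings_per_year:.0f}/year saved -> ~{payback_years:.1f} years to pay back the premium")
```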
eva2000 - Saturday, July 31, 2010 - link
It would be interesting to see how they compare on a Linux OS, i.e. CentOS 5.5 or Red Hat 5.5 :)